AIRCRAFT FLIGHT INFORMATION VISUALIZATION SYSTEM AND ASSOCIATED METHOD
Abstract:
This system comprises a display device (14), a man-machine interface (18), and a module (36) for dynamic generation of synthetic images, each image comprising a synthetic representation of the environment and a curve representative of a trajectory, said module (36) being configured to generate a first synthesis image centered around a first central point of interest, to control the display thereof, and to detect an action of modification of the central point of interest by an operator via said man-machine interface (18). The generation module (36) is also configured to determine, according to said modification action, a second central point of interest located along said curve, whatever said modification action, to generate a second synthesis image centered around said second central point of interest, and to control the display thereof.
Publication number: FR3036476A1
Application number: FR1501021
Filing date: 2015-05-19
Publication date: 2016-11-25
Inventors: Arnaud Branthomme; Igor Fain; Patrick Darses
Applicant: Dassault Aviation SA
IPC main class:
Patent description:
[0001] The present invention relates to a system for displaying information relating to a flight of an aircraft, said system comprising: - a display device, - a module for dynamic generation of synthetic images, configured to generate synthetic images, each synthetic image comprising a synthetic representation of the environment located in the vicinity of a trajectory of the aircraft and a curve representative of a trajectory of the aircraft, said curve being superimposed on said synthetic representation, said generation module being configured to generate a first synthesis image centered around a first central point of interest and to control the display, on said display device, of said first synthesis image, - a man-machine interface, said generation module being configured to detect an action of modification of the central point of interest by an operator via said man-machine interface. Such a system is for example intended to be installed in the cockpit of an aircraft, associated with a cockpit display, or on the ground, notably in a ground station, for example in a mission preparation system. The display device is for example a head-down screen integrated in the dashboard of the cockpit, or a screen of a mission preparation system. To facilitate the piloting of the aircraft, and to give the pilot a global indication of the terrain structure situated ahead of the aircraft, it is known to generate synthetic images of the landscape around the aircraft, notably from topographic databases, based on the current position of the aircraft determined by the navigation system of the aircraft. Synthetic images generally include a synthetic surface representation of the terrain. 
Such a visualization system allows the operator to visualize the relief that may be around the aircraft, and may also allow the operator to move the point on which the image is centered, in order to view areas of the ground around the position of the aircraft, or to remove an ambiguity. [0002] The synthetic images are, for example, three-dimensional images, representing the trajectory of the aircraft and the surrounding terrain according to a first type of perspective, making it possible to provide the operator or the pilot with a clear representation of the situation of the aircraft in relation to its environment. Such images make it possible to improve the situational awareness of the operator and to simplify his decision-making, in particular by sparing the operator from having to mentally reconstruct, from top views and side views, the necessary three-dimensional information. The synthetic images can also be viewed according to a second type of perspective, for example vertical and/or horizontal views of the aircraft trajectory, i.e. top or side views of that trajectory. Such images are for example two-dimensional. Such images are more particularly adapted for precision activities, in particular to visualize the vertical trajectory of the aircraft during a climb or descent phase, or to redefine flight plan waypoints. Such a visualization system provides substantial assistance to the operator, but its handling can create an undesirable additional workload for the operator. In particular, when an operator moves the point on which the image is centered, this displacement is not constrained, and the operator may be led to visualize areas of the terrain remote from the position of the aircraft and its trajectory, and of little interest. 
An object of the invention is therefore to provide a system for displaying information relating to a flight of an aircraft that is capable of displaying at any time an image presenting the information relevant for an operator, while minimizing the workload necessary to visualize such an image. For this purpose, the subject of the invention is a system of the aforementioned type, characterized in that said generation module is furthermore configured: - to determine, according to said modification action, a second central point of interest, situated along said curve, said second central point of interest being along said curve regardless of said modification action, - to generate a second synthesis image centered around said second central point of interest, and - to control the display, on said display device, of said second synthesis image. [0003] The system according to the invention may comprise one or more of the following characteristics, taken in isolation or in any technically possible combination: - said action for modifying the central point of interest comprises a displacement of a member by the operator between a first position and a second position; - said first central point of interest is located along said curve, and said action for modifying the central point of interest comprises a movement of a member by the operator between a first position and at least a second position in a direction not parallel to the tangent of said curve at said first central point of interest; - said first central point of interest is located along said curve, and said generation module is configured to determine, as a function of a displacement vector between said first position and said second position, a curvilinear distance on said curve between said first central point of interest and said second central point of interest, and to determine, from a position on the curve of said first central point of interest and said curvilinear distance, a position on the 
curve of said second central point of interest; - said generation module is configured to determine said curvilinear distance as a function of said displacement vector and of a vector tangent to said curve at said first central point of interest, in particular as a function of a scalar product between a projection of said displacement vector on a horizontal plane of said first synthesis image and said tangent vector; - said synthetic images are three-dimensional images, said synthetic representation of the environment being a three-dimensional representation and said curve being a three-dimensional curve; - said first synthetic image is viewed from a first point of view, and said module is configured to detect an action of rotation of the position of said point of view with respect to said first central point of interest in a vertical plane, respectively in a horizontal plane, said rotation action comprising a displacement of a member by an operator in a vertical direction, respectively in a horizontal direction, said generation module being further configured to determine, according to said rotation action, a modified point of view, to generate a modified synthesis image viewed from said modified point of view, and to control the display, on said display device, of said modified synthetic image; - said generation module is configured to display on said first synthetic image a vertical slider and/or a horizontal slider, and said rotation action comprises a movement of a member by an operator on said vertical slider, respectively on said horizontal slider, in a vertical direction, respectively in a horizontal direction; - said man-machine interface comprises a tactile control device, and said action for modifying the central point of interest comprises a displacement of said member by the operator between said first position and said second position on said tactile control device; - said display device comprises a touch screen, and comprises said tactile control device. 
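By way of illustration only (this sketch is not part of the patent text), the determination of the second central point of interest described above can be expressed in a few lines of Python. All function names, the unit drag-to-distance gain, and the polyline trajectory model are assumptions, not elements of the claimed system:

```python
import numpy as np

def curvilinear_offset(drag_vec, tangent_vec):
    """Signed curvilinear distance derived from a drag gesture: the scalar
    product of the drag vector projected on the horizontal plane with the
    unit tangent of the curve at the current central point of interest
    (an illustrative gain of 1 is assumed)."""
    drag_h = np.array([drag_vec[0], drag_vec[1], 0.0])  # horizontal projection
    t = np.asarray(tangent_vec, dtype=float)
    t = t / np.linalg.norm(t)
    return float(np.dot(drag_h, t))

def point_at_arclength(polyline, s):
    """Walk a 3-D polyline trajectory and return the point at curvilinear
    abscissa s, clamped to the ends of the curve, so the new central point
    of interest always stays on the trajectory."""
    pts = [np.asarray(p, dtype=float) for p in polyline]
    lengths = [np.linalg.norm(b - a) for a, b in zip(pts, pts[1:])]
    s = max(0.0, min(s, sum(lengths)))
    for a, b, seg_len in zip(pts, pts[1:], lengths):
        if s <= seg_len and seg_len > 0:
            return a + (s / seg_len) * (b - a)
        s -= seg_len
    return pts[-1]
```

Because the offset is applied along the curve, any drag, whatever its direction, yields a second central point of interest located on the trajectory, as claimed.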
The invention also relates to a method for displaying information relating to a flight of an aircraft, said method being characterized in that it comprises the following successive steps: - display, on a display device, of a first synthetic image comprising a synthetic representation of the environment located in the vicinity of a trajectory of the aircraft and a curve representative of a trajectory of the aircraft, said curve being superimposed on said synthetic representation, said first synthesis image being centered around a first central point of interest, - detection of an action of modification of the central point of interest by an operator via a man-machine interface, - determination, as a function of said modification action, of a second central point of interest located along said curve, - generation of a second synthesis image centered around said second central point of interest, - display, on said display device, of said second synthesis image. The method according to the invention may comprise one or more of the following characteristics, taken in isolation or in any technically possible combination: - said action for modifying the central point of interest comprises a displacement of a member by an operator between a first position and a second position; - said first central point of interest is located along said curve, and said action for modifying the central point of interest comprises a movement of a member by an operator between a first position and a second position in a direction not parallel to the tangent of said curve at said first central point of interest; - said action for modifying the central point of interest comprises a movement of said member by the operator between said first position and said second position on a touch screen; - said first central point of interest is situated along said curve, and the step of determining said second central point of interest comprises: - a determination phase, as a function of a displacement 
vector between said first position and said second position, of a curvilinear distance on said curve between said first central point of interest and said second central point of interest, and - a determination phase, from a position on the curve of said first central point of interest and said curvilinear distance, of said second central point of interest; - said curvilinear distance is determined as a function of said displacement vector and of a vector tangent to said curve at said first central point of interest, in particular as a function of a scalar product between a projection of said displacement vector on a horizontal plane of said first synthetic image and said tangent vector. According to a second aspect, the invention relates to a system for displaying information relating to a flight of an aircraft, said system comprising: - a tactile control device; - a module for dynamic generation of synthesis images, configured to generate synthetic images, each synthesis image comprising a representation of the environment located in the vicinity of a trajectory of the aircraft, said generation module being suitable for generating a first synthesis image comprising a representation of the environment according to a first scale, and for controlling the display, on a display device, of said first synthesis image, said generation module being configured to: - detect a scale modification action by an operator via said tactile control device, said modification action comprising moving two control members on said tactile control device opposite two points of a surface of said device, - when a scale modification action is detected, determine a factor of modification of the scale of the synthesis image, said system being characterized in that said generation module is configured to determine the scale modification factor at each instant, during said modification action: - as a function of a distance between said points at said instant if said points are included in a first predefined zone of said surface at 
said instant, and - if at least one of said points is not included in said first zone at said instant, as a function of a duration of maintaining said point outside said first zone. The system according to this second aspect may comprise one or more of the following characteristics, taken in isolation or in any technically possible combination: - said generation module is further configured to determine a second scale by applying said modification factor to said first scale, and to generate a second synthesis image at said second scale; - said synthesis images are three-dimensional synthetic images represented according to a first type of perspective, and the scale of each synthesis image is defined by an observation distance between a central point of interest on which said synthesis image is centered and a point of view from which said image is viewed, said first scale being defined by a first observation distance between a first central point of interest on which said first image is centered and a first point of view from which said first image is viewed; - said generation module is configured to determine a second observation distance by applying said modification factor to said first observation distance, and to generate a second synthesis image centered on a second central point of interest and viewed from a second point of view located at said second observation distance from said second central point of interest; - said synthesis images are synthesis images represented according to a second type of perspective, and the scale of each synthesis image is defined by a ratio between an apparent dimension of said synthesis image when said synthesis image is displayed on said display device and a corresponding actual dimension of the environment represented on said synthesis image, said first scale being defined by a first ratio between an apparent dimension of said first synthesis image when said first synthesis image is displayed on said display device and a 
first corresponding actual dimension of the environment represented on said first synthesis image; - said generation module is configured to determine a second ratio by applying said modification factor to said first ratio, and to generate a second synthesis image such that the ratio between the apparent dimension of said second synthesis image when said second synthesis image is displayed on said display device and a second corresponding actual dimension of the environment represented on said second synthesis image is equal to said second ratio; - said generation module is configured to determine the scale modification factor regardless of the distance between said points if at least one of said points is not included in said first zone; - said generation module is configured to detect an initial positioning of said control members opposite two initial points, and to detect, at each instant, a position of the points opposite which said control members are positioned at this instant; - said generation module is configured to determine, when said initial positioning is detected, said first zone, as a function of the position of said initial points, said first zone including said initial points; - said generation module is configured to determine, when said initial positioning is detected, a first closed curve and a second closed curve situated inside said first closed curve, said first zone being formed by all the points situated inside said first closed curve and outside said second closed curve; - said generation module is configured to determine said scale modification factor as a function of a distance between said points as long as said points are included in said first zone, then, when at least one of said points leaves said first zone, to determine the scale modification factor as a function of the duration of maintaining said point outside said first zone; - if said control members are arranged on said tactile control device opposite points included in said first 
zone, said generation module is configured to determine the scale modification factor as a function of a ratio between the distance between said initial points and the distance between these points; - if said control members are disposed on said tactile control device opposite points included in said first zone, said scale modification factor is a strictly monotonic function of the difference between the distance between said points and the distance between said initial points; - if at least one of said control members is disposed on said tactile control device opposite a point not included in said first zone, said generation module is configured to determine the scale modification factor as a strictly monotonic function of the duration of maintaining said member opposite a point not included in said first zone; - if at least one of said control members is disposed on said tactile control device opposite a point not included in said first zone, said generation module is configured to determine the scale modification factor as a strictly increasing and convex function of the duration of maintaining said member opposite a point not included in said first zone; - if at least one of said control members is disposed on said touch screen opposite a point not included in said first zone, said generation module is configured to determine the scale modification factor as a strictly decreasing and concave function of the duration of maintaining said member opposite a point not included in said first zone. 
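As an informal illustration (outside the patent text), the two-regime scale modification factor described above can be sketched as follows. The particular functions and constants chosen here (exponential growth, quadratic decay, the clamp value, the rate) are assumptions; the claims only require strict monotonicity, convexity when increasing, and concavity when decreasing:

```python
import math

def scale_factor(d0, d, in_zone, hold_time, rate=0.5, zooming_in=True):
    """Two-regime scale-modification factor for a pinch gesture:
    - while both touch points remain in the first (annular) zone, the
      factor follows the ratio of the current finger spacing d to the
      initial spacing d0, independently of time;
    - once at least one point leaves the zone, the factor depends only on
      the duration the point has been held outside: here a strictly
      increasing, convex exponential when zooming in, and a strictly
      decreasing, concave quadratic (clamped to stay positive) when
      zooming out. Both function choices are illustrative."""
    if in_zone:
        return d / d0
    if zooming_in:
        return math.exp(rate * hold_time)                 # increasing, convex
    return max(0.05, 1.0 - rate * hold_time ** 2)         # decreasing, concave
```

Holding a finger outside the zone thus keeps the zoom running at a growing rate without requiring ever-larger finger spacing, which is the practical motivation for the second regime.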
[0004] The invention also relates to a method for displaying information relating to a flight of an aircraft, said method comprising: - a step of generating a first synthesis image comprising a representation of the environment located in the vicinity of a trajectory of the aircraft according to a first scale, - a step of displaying said first synthesis image on a display device, - a step of detecting a scale modification action by an operator via a tactile control device, said modification action comprising a movement of two control members on said tactile control device opposite two points of a surface of said tactile control device, - a step of determining a scale modification factor of the synthesis image, said modification factor being determined as a function of a distance between said points if said points are included in a first predefined zone of said tactile control device, or, if at least one of said points is not included in said first zone, as a function of a duration of maintaining said point outside said first zone. 
This method may comprise one or more of the following features, taken singly or in any technically possible combination: - the method further comprises a step of determining a second scale by applying said modification factor to said first scale, and a step of generating a second synthesis image at said second scale; - said synthesis images are three-dimensional synthetic images represented according to a first type of perspective, and the scale of each synthesis image is defined by an observation distance between a central point of interest on which said synthesis image is centered and a point of view from which said image is viewed, said first scale being defined by a first observation distance between a first central point of interest on which said first image is centered and a first point of view from which said first image is viewed; - said determining step comprises determining a second observation distance by applying said modification factor to said first observation distance, and said second synthesis image is centered on a second central point of interest and viewed from a second point of view located at said second observation distance from said second central point of interest; - said synthesis images are synthesis images represented according to a second type of perspective, and the scale of each synthesis image is defined by a ratio between an apparent dimension of said synthesis image and a corresponding actual dimension of the environment represented on said synthesis image, said first scale being defined by a first ratio between an apparent dimension of said first synthesis image and a first corresponding actual dimension of the environment represented on said first synthesis image; - said determining step comprises determining a second ratio by applying said modification factor to said first ratio, and said second synthesis image is such that the ratio between the apparent dimension of said second synthesis image and a second corresponding actual dimension of the environment represented on said second synthesis image is equal to said second ratio; - said method comprises a first step of determining a scale modification factor of the synthesis image as a function of a distance between said points, said points being included in said first predefined zone, then, at least one of said points being located outside said first zone, a second step of determining a scale modification factor of the synthesis image as a function of the duration of maintaining said point outside said first zone. According to a third aspect, the invention relates to a system for displaying information relating to a flight of an aircraft, said system comprising a module for dynamic generation of synthetic images, configured to generate three-dimensional synthetic images, 
each three-dimensional synthetic image comprising a representation of the environment located in the vicinity of a trajectory of the aircraft, and a curve representative of a portion of the trajectory of the aircraft, said curve being superimposed on said synthetic representation, each three-dimensional synthesis image being viewed from a given point of view, said system being characterized in that said generation module is configured to determine an optimal position of said point of view such that the length of the portion of the trajectory of the aircraft visible on a three-dimensional synthetic image viewed from a point of view at said optimal position is maximized. This system may comprise one or more of the following characteristics, taken separately or in any technically possible combination: - said generation module is configured to generate an optimized three-dimensional synthesis image viewed from an optimal point of view located at said optimal position, and to control its display by a display device; - each three-dimensional synthetic image being centered on a central point of interest, the position of a point of view is defined by: an observation distance between said point of view and said central point of interest, and at least one angular position corresponding to an angle formed between a direction between said point of view and said central point of interest and a predetermined plane; - said observation distance being fixed, said generation module is configured to determine an optimal angular position of said point of view such that the length of the portion of the flight path of the aircraft visible on a three-dimensional synthetic image viewed from a viewpoint located at said observation distance from said central point of interest and at said optimal angular position is maximized; - the position of any point of view is defined by said observation distance between this point of view and said central point of interest, a vertical angular position of said point of view, and a horizontal angular position of said point 
of view; - said observation distance and said vertical angular position being fixed, said generation module is configured to determine an optimal horizontal angular position of said point of view such that the length of the portion of the flight path of the aircraft visible on a three-dimensional synthesis image viewed from a viewpoint located at said observation distance from said central point of interest, at said vertical angular position and at said optimal horizontal angular position is maximized; - said horizontal angular position corresponds to an angle formed between the direction between said point of view and said central point of interest and a predefined vertical plane; - said vertical angular position corresponds to an angle formed between the direction between said point of view and said central point of interest and a horizontal plane; - each three-dimensional synthesis image is viewed according to a predefined fixed aperture angle; - said optimal position is such that a set of predefined points of interest is visible on the three-dimensional synthesis image viewed from a viewpoint at said optimal position. 
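Purely as an illustration (not part of the patent text), the parameterization of the point of view by an observation distance, a vertical angular position and a horizontal angular position can be sketched as a spherical-coordinate camera placement. The axis conventions and function name are assumptions:

```python
import math

def viewpoint_position(center, distance, vertical_angle, horizontal_angle):
    """Place the virtual camera at `distance` from the central point of
    interest `center`, with `vertical_angle` measured from the horizontal
    plane through the center and `horizontal_angle` measured from a
    reference vertical plane. Angles in radians; conventions illustrative."""
    cx, cy, cz = center
    horiz = distance * math.cos(vertical_angle)  # range in the horizontal plane
    return (cx + horiz * math.cos(horizontal_angle),
            cy + horiz * math.sin(horizontal_angle),
            cz + distance * math.sin(vertical_angle))
```

With the observation distance and vertical angle held fixed, an optimization over this parameterization reduces to a one-dimensional search over the horizontal angular position, as described above.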
- to determine said optimal position, said generation module is configured to: determine a set of successive points of the trajectory, then, from an initial position of said point of view, iteratively determine a plurality of successive modified positions of the point of view, each modified position being determined so that an additional point of said plurality of successive points is visible on a synthetic image viewed from a modified point of view at said modified position, the points of said plurality of successive points located downstream of the trajectory with respect to said additional point remaining visible on said synthesis image; - said generation module is configured to determine said plurality of successive modified positions of the point of view until no modified position of the point of view makes it possible to make an additional point of said plurality of successive points visible on a synthetic image without at least one point of said plurality of successive points located downstream of the trajectory relative to said additional point ceasing to be visible. The invention also relates to a method for displaying information relating to a flight of an aircraft, said method comprising the generation of a three-dimensional synthetic image comprising a representation of the environment located in the vicinity of a trajectory of the aircraft and a curve representative of a portion of the trajectory of the aircraft, said curve being superimposed on said synthetic representation, said three-dimensional synthesis image being viewed from a point of view, said method being characterized in that it comprises a step of determining an optimal position of said point of view, said optimal position being such that the length of the portion of the flight path of the aircraft visible on a three-dimensional synthetic image viewed from a point of view at said optimal position is maximized. 
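The iterative determination described above can be approximated by a greedy search, sketched below for illustration only (outside the patent text). The visibility predicate, the candidate-position set, and the stopping rule based on previously visible points are simplifying assumptions standing in for the frustum and occlusion tests of an actual renderer:

```python
def optimise_viewpoint(initial_pos, candidate_positions, traj_points, visible):
    """Greedy search for the viewpoint showing the longest trajectory portion.

    `visible(pos, point)` is an abstract predicate (frustum/occlusion test,
    not modelled here). Starting from `initial_pos`, a candidate position is
    accepted only if it makes strictly more trajectory points visible while
    every previously visible point remains visible; the search stops when no
    candidate improves on the current viewpoint."""
    pos = initial_pos
    shown = [p for p in traj_points if visible(pos, p)]
    improved = True
    while improved:
        improved = False
        for cand in candidate_positions:
            cand_shown = [p for p in traj_points if visible(cand, p)]
            # accept only if strictly more points, none previously shown lost
            if len(cand_shown) > len(shown) and all(p in cand_shown for p in shown):
                pos, shown = cand, cand_shown
                improved = True
                break
    return pos, shown
```

The loop terminates because each accepted move strictly increases the number of visible trajectory points, which is bounded by the size of the point set.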
This method may comprise one or more of the following features, taken singly or in any technically possible combination: - the method further comprises a step of displaying, by a display device, an optimized three-dimensional synthesis image viewed from an optimal point of view located at said optimal position; - each three-dimensional synthetic image being centered on a central point of interest, the position of a point of view is defined by: an observation distance between said point of view and said central point of interest, and at least one angular position corresponding to an angle formed between a direction between said point of view and said central point of interest and a predetermined plane; - said observation distance being fixed, the step of determining said optimal position comprises determining an optimal angular position of said point of view such that the length of the portion of the trajectory of the aircraft visible on a three-dimensional synthetic image viewed from a viewpoint located at said observation distance from said central point of interest and at said optimal angular position is maximized; - the position of any point of view is defined by said observation distance between this point of view and said central point of interest, a vertical angular position of said point of view, and a horizontal angular position of said point of view; - said observation distance and said vertical angular position being fixed, the step of determining said optimal position comprises determining an optimal horizontal angular position of said point of view such that the length of the portion of the trajectory of the aircraft visible on a three-dimensional synthetic image viewed from a viewpoint located at said observation distance from said central point of interest, at said vertical angular position and at said optimal horizontal angular position is maximized; - said step of determining said optimal position comprises: a phase of determining a set of successive points of the trajectory, then, starting from an initial position of said point of view, a phase of 
determining a modified position of the point of view, such that at the end of said phase, a first point of said plurality of successive points is visible on a synthetic image viewed from a modified point of view located at said modified position; - said step of determining said optimal position further comprises a plurality of successive additional phases of determining a modified position of the point of view, such that at the end of each of said phases, an additional point of said plurality of successive points is visible on a synthetic image viewed from a modified point of view located at said modified position, the points of said plurality of successive points located downstream of the trajectory with respect to said additional point remaining visible on said synthesis image. The invention will be better understood on reading the description which follows, given solely by way of example and with reference to the drawings, in which: - FIG. 1 schematically illustrates a display system according to an embodiment of the invention; - FIG. 2 is a schematic view of a three-dimensional synthetic image according to a first type of perspective; - FIG. 3 is a schematic view of a synthetic image according to a second type of perspective, seen from above; - FIG. 4 illustrates several examples of representations of symbolic objects, according to the first and second types of perspective; - FIG. 5 illustrates an exemplary representation of a symbolic object representative of the position of an aircraft according to the first and second types of perspective; - FIG. 6 illustrates an example of three-dimensional representation of a symbolic object representative of a cloud and a storm cell according to the first type of perspective; - FIG. 7 is a diagram illustrating the definition of zones on a screen of the system of FIG. 1 during a modification of the scale of the synthesis image; - FIGS. 8 and 9 illustrate examples of functions used by the display system of FIG. 
1 during a transition from an image according to the first type of perspective to an image according to the second type of perspective; and - FIG. 10 is a block diagram illustrating the implementation of a display method according to one embodiment. A first system 10 for displaying information relating to a flight of an aircraft is illustrated schematically in FIG. 1. This system 10 is for example intended to be mounted in an aircraft, notably in a cockpit, for use by the crew of the aircraft, or in the cabin, for passengers of the aircraft. Alternatively, the system 10 may also be located on the ground, particularly in a ground station, and may be used for mission preparation or for remote control of an aircraft from the ground station. The system 10 comprises a central processing unit 12 and a display device 14. [0005] The display device 14 comprises a screen 16 and means for processing the graphic information, for example a graphics processor and an associated graphics memory. The graphics processor is adapted to process the graphical information stored in the graphics memory and to display on the screen 16 this information or a representation thereof. The system 10 further comprises a man-machine interface 18 for setting the parameters of the display on the display device 14 by an operator, for example a member of the crew of the aircraft, a passenger, or an operator on the ground. The man-machine interface 18 comprises, for example, a tactile control device, configured to detect the position of one or more members, hereinafter referred to as control members, on a surface of this tactile control device. In known manner, these control members may be a stylus or the fingers of an operator. Some touch control device technologies make it possible to detect the position of control members without there being contact between the control member and the surface of the touch control device. 
Subsequently, the expression "on" a surface or "on" a screen should be understood as meaning "on or near" that surface or screen. In the remainder of the description, an embodiment will be considered in which this tactile control device and the screen 16 are combined in the form of a touch screen. Thus, the man-machine interface 18 is configured to detect the position of one or more members, hereinafter referred to as control members, on the surface of the screen 16. In a known manner, these control members may be a stylus or the fingers of an operator. The central processing unit 12 is adapted to execute the applications necessary for the operation of the system 10. The central processing unit 12 comprises for this purpose a processor 24 and one or more memories 26. [0006] The processor 24 is adapted to execute applications contained in the memory 26, in particular an operating system allowing the conventional operation of a computer system. The memory 26 comprises different memory areas containing in particular a map database 28, flight data 30 relating to a flight plan 20 of the aircraft, and applications intended to be executed by the processor 24. The flight data 30 include a trajectory provided for the aircraft, and a set of geographical points associated with the flight plan, which may be associated with constraints, including altitude, speed and time constraints, for example an altitude above, below or at which the aircraft must fly. The memory 26 comprises an application 36 for dynamic generation of synthetic images, also referred to hereafter as the module 36 for dynamic generation of synthesis images, for display by the display device 14. The module 36 for dynamic generation of synthetic images is configured to generate synthetic images representative of the environment located in the vicinity of the trajectory of the aircraft, and to control their display by the display device 14.
The module 36 is also configured to detect modification actions performed by an operator, via the man-machine interface 18, on the generated synthesis images, in particular actions for modifying parameters of these images, and to generate modified synthetic images in response to such modification actions. The module 36 is configured to generate synthetic images according to a first type of perspective. [0007] Synthetic images according to the first type of perspective are three-dimensional synthetic images. The first type of perspective is preferably a conical perspective, that is to say a perspective with a vanishing point. As diagrammatically illustrated in FIG. 2, each synthetic image according to the first type of perspective, denoted 38, comprises a synthetic representation 42 of the environment located in the vicinity of the trajectory of the aircraft, in particular the terrain and its relief. This representation may include aeronautical data such as airports and their landing strips and/or geographical landmarks such as cities and bodies of water (rivers, lakes, seas). [0008] Synthetic images according to the first type of perspective may be either egocentric, that is to say viewed from a point of view corresponding to the current position of the aircraft, for example a point of view located in the cockpit of the aircraft, or exocentric, that is to say viewed from a virtual camera located at a point other than the current position of the aircraft. In particular, an exocentric image may correspond to an image that would be seen by a virtual camera located outside the aircraft and visualizing the aircraft. Subsequently, the point of the space from which an image is seen will be called the point of view Pv. The position of this point of view Pv corresponds to the position of the virtual camera mentioned above. The module 36 is also configured to generate synthetic images according to a second type of perspective.
The computer-generated images according to the second type of perspective are, for example, images viewed from an axonometric perspective, which has no vanishing point and which preserves the ratio between any length taken along a direction of space and that same length measured on its representation on the image. Such a perspective is also called a cylindrical, orthographic, parallel or orthonormal perspective. The synthetic images according to the second type of perspective are, for example, vertical projection views, making it possible to visualize the vertical trajectory of the aircraft, and/or horizontal projection views illustrating the horizontal trajectory of the aircraft. [0009] An example of a synthetic image 39a according to the second type of perspective, in horizontal projection, that is to say seen from above, is illustrated in FIG. 3. On this image is superimposed a synthetic image 39b according to the second type of perspective, in vertical projection, that is to say seen from the side. [0010] The visual impression of synthetic images in an axonometric perspective, seen from above or from the side, is very similar to that of two-dimensional images seen from above or from the side respectively. Thus, alternatively, the synthetic images according to the second type of perspective are true two-dimensional images, without depth. [0011] Preferably, in the synthesis images according to the first and second types of perspective, the vertical dimensions are represented on a scale larger than the horizontal dimensions. In particular, the terrain as well as all the objects represented on the computer-generated images are resized by a predefined factor, for example three, along the vertical axis, so as to make it easier for a user to perceive the altitude variations between the terrain, the aircraft and the different objects. Each synthesis image is centered on a point hereinafter called the central point of interest Pc.
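The difference between the two types of perspective can be sketched numerically. The following Python fragment (an illustrative sketch, not the patent's rendering pipeline; the camera frame and function names are assumptions) projects a 3D point with and without a vanishing point:

```python
def project_conical(p, eye, f=1.0):
    # First type of perspective (conical): the camera looks along +x
    # from `eye`; a segment twice as far from the point of view maps
    # to an image half as long.
    dx, dy, dz = p[0] - eye[0], p[1] - eye[1], p[2] - eye[2]
    return (f * dy / dx, f * dz / dx)

def project_axonometric(p, eye):
    # Second type of perspective (axonometric): no vanishing point; a
    # length taken along a direction of space keeps the same measured
    # length on the image, whatever its depth.
    return (p[1] - eye[1], p[2] - eye[2])
```

A 1-unit vertical segment thus appears half as tall when moved from depth 2 to depth 4 in the conical projection, but keeps its size in the axonometric projection.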
In particular, each synthetic image according to the first type of perspective is centered on a central point of interest Pc situated at an observation distance Z from the point of view Pv of the image. The point of view Pv and the central point of interest Pc define a direction forming, with a horizontal plane, a viewing angle subsequently called the vertical angular position and denoted av. In particular, a zero vertical angular position is associated with a point of view lying in the horizontal plane containing the central point of interest Pc, a negative vertical angular position is associated with a point of view lying below the horizontal plane containing the central point of interest Pc, while a positive vertical angular position is associated with a point of view located above the horizontal plane containing the central point of interest Pc. The point of view Pv and the central point of interest Pc also define a direction forming, with a predefined vertical plane, for example a vertical plane tangent to the trajectory of the aircraft, an angle of view subsequently called the horizontal angular position and denoted ah. In particular, a zero horizontal angular position is associated with a point of view situated upstream of the central point of interest Pc along the direction of the trajectory, the direction formed between the point of view and the central point of interest being parallel to the vertical plane tangent to the trajectory of the aircraft. A horizontal angular position less than 90 degrees in absolute value is associated with a point of view located upstream of the central point of interest Pc according to the direction of the trajectory, while a horizontal angular position greater than 90 degrees in absolute value is associated with a point of view located downstream of the central point of interest Pc according to the direction of the trajectory.
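The placement of the point of view defined by these two angles can be sketched as follows (Python; the local frame, with x pointing downstream along the trajectory and z up, and the function name are assumptions made for illustration):

```python
import math

def viewpoint_position(pc, z, ah_deg, av_deg):
    # Place the point of view Pv at observation distance Z from the
    # central point of interest Pc, given the horizontal angular
    # position ah and the vertical angular position av (degrees).
    # With ah = 0 and av = 0, Pv lies upstream of Pc in the horizontal
    # plane containing Pc, as described in the text.
    ah, av = math.radians(ah_deg), math.radians(av_deg)
    return (pc[0] - z * math.cos(av) * math.cos(ah),
            pc[1] + z * math.cos(av) * math.sin(ah),
            pc[2] + z * math.sin(av))
```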
[0012] Each synthetic image according to the first type of perspective represents an observation volume substantially corresponding to a pyramid, hereinafter referred to as the observation pyramid, with a horizontal aperture angle denoted a1 and a vertical aperture angle denoted a2. Each synthetic image according to the first type of perspective therefore represents a zone of length A1 = 2Z tan(a1/2) and of width A2 = 2Z tan(a2/2). The ratio between the length A1 and the width A2 is set according to the dimensions of the displayed image. Indeed, the computer images are intended to be displayed in a window of the screen 16, whose length Lf and width lf are preferably fixed. Each synthetic image according to the second type of perspective also represents an area of length A1 and width A2. In the images of the second type of perspective that are viewed from the side, the width A2 actually corresponds to the height of the area shown. The synthesis images according to the second type of perspective represent the environment according to a given scale, which is defined as the ratio between the length Lf of the window in which the image is displayed and the actual length A1 of the area represented on this image. At constant length Lf, the scale of an image according to the second type of perspective is thus defined by the actual length A1 of the zone represented in this image. By extension, the "scale" of an image according to the first type of perspective will refer to the ratio between the length Lf of the window in which the image is displayed and the quantity 2Z tan(a1/2) corresponding to the actual length A1 of the area shown on this image. At a constant horizontal aperture angle, the scale of an image according to the first type of perspective is therefore defined by the distance Z between the point of view Pv and the central point of interest Pc.
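The relations above can be checked with a short sketch (Python; function names are illustrative):

```python
import math

def represented_zone(z, a1_deg, a2_deg):
    # Dimensions of the zone represented on a first-type image:
    # A1 = 2 Z tan(a1 / 2), A2 = 2 Z tan(a2 / 2).
    a1 = 2.0 * z * math.tan(math.radians(a1_deg) / 2.0)
    a2 = 2.0 * z * math.tan(math.radians(a2_deg) / 2.0)
    return a1, a2

def image_scale(lf, z, a1_deg):
    # Scale of a first-type image: ratio between the window length Lf
    # and the quantity 2 Z tan(a1 / 2), i.e. the actual length A1.
    return lf / (2.0 * z * math.tan(math.radians(a1_deg) / 2.0))
```

For example, with Z = 100 and a 90-degree horizontal aperture, the represented length A1 is 200, so a 400-unit window corresponds to a scale of 2.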
[0013] In general, the apparent size of an object will refer to the size of this object as displayed on the screen, and its actual size to the size of this object relative to the environment represented. Each synthetic image comprises a scale indicator 40. This scale indicator 40 is for example a disc whose apparent diameter is constant. Thus, the actual diameter of this disc, relative to the environment shown, varies according to the scale of the image. The scale of a synthetic image according to the first or the second type of perspective is thus equal to the ratio between the apparent diameter of the disc 40, which is preferably constant, and the actual diameter of this disc, which varies according to the scale of the image. Preferably, the actual value of the diameter or radius of this disc relative to the environment shown is displayed, which allows a user to appreciate the scale of representation of the image. This disc 40 is centered on the current position of the aircraft. In addition, the disc 40 is preferably provided at its periphery with graduations 41 indicating a course relative to the current position of the aircraft. Each synthetic image furthermore comprises, when at least a portion of the trajectory of the aircraft is included in the zone represented on the synthesis image, a curve 44 representative of this trajectory portion, this curve 44 being superimposed on the synthetic representation of the environment. Preferably, the trajectory portion is represented on the synthetic image in the form of a ribbon. Such a shape notably allows a user to perceive the roll associated with each point of the trajectory. The ribbon is for example a solid colored ribbon. [0014] In addition, on the computer-generated images according to the first type of perspective, the actual width of this ribbon is, for example, constant.
Thus, the apparent width of the ribbon displayed on the computer image at a given point of the trajectory is a function of the distance between this given point and the point of view of the computer image, which allows a user to perceive this distance. [0015] Preferably, a wire-like pattern is superimposed on this ribbon, for example in the form of two lines delimiting the width of the ribbon, whose thickness is constant over the entire displayed image. Such a plot makes it possible to keep the trajectory visible even at points very far from the point of view of the image. Further, the portion of the trajectory closest to the point of view, that is to say located at a distance from the image point of view less than a first predetermined threshold distance, may be represented only by a wire-type line. In this case, the colored ribbon preferably has an increasing transparency from a point of the trajectory located at a second threshold distance from the point of view, greater than the first threshold distance, to the point of the trajectory located at the first threshold distance, for which the ribbon is completely transparent. Such transparency avoids overloading the computer image. Each computer image may also include symbolic objects. In particular, these objects are seen according to the first or second type of perspective depending on whether the synthetic image is viewed according to the first or second type of perspective respectively. These symbolic objects are for example representative of the position of crossing points, associated or not with constraints, of altitude profile points associated with the trajectory of the aircraft, of the position of the aircraft and/or of objects that could interfere with the trajectory of the aircraft, for example clouds, storm cells or other aircraft. A first symbolic object 46 illustrates a position of the aircraft along the trajectory.
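The transparency ramp of the ribbon between the two threshold distances can be sketched as follows (Python; the linear ramp is an assumption, since the text only requires transparency increasing toward the first threshold):

```python
def ribbon_opacity(d, d1, d2):
    # Opacity of the colored ribbon at a trajectory point located at
    # distance d from the point of view: fully transparent at the
    # first threshold distance d1 (only the wire-type line remains),
    # fully opaque beyond the second threshold distance d2, with a
    # ramp in between.
    if d <= d1:
        return 0.0
    if d >= d2:
        return 1.0
    return (d - d1) / (d2 - d1)
```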
This is generally the current position of the aircraft or, in the case of display of computer images representing a simulation of a flight or of a particular flight phase of the aircraft, a future position of the aircraft. The crossing points include crossing points associated with a vertical constraint, for example an altitude above, below or at which the aircraft is to fly. Altitude profile points are points specific to the flight path of the aircraft corresponding to a flight phase change. These points include in particular an end-of-climb point (noted TOC for "Top of Climb"), which corresponds to the point of transition between the climb phase and the cruise phase of the aircraft according to the planned trajectory, a start-of-descent point (noted TOD for "Top of Descent"), from which the aircraft must begin its descent phase, and one or more points of change of cruise altitude (noted BOSC for "Bottom of Step Climb"). Preferably, the three-dimensional shape of each symbolic object according to the first type of perspective is chosen so as to be easily recognizable and distinguishable from the shape of symbolic objects of different types, irrespective of the angle of view from which this symbolic object is visualized. In addition, this three-dimensional shape must also be chosen so that it can be viewed according to the second type of perspective without loss of visual cues for a user, especially during a transition between an image according to the first type of perspective and an image according to the second type of perspective, while remaining recognizable when viewed according to the second type of perspective. [0016] Moreover, each symbolic object can be extended on the synthetic images by a vertical line extending to the ground level or to a predetermined altitude, for example the current altitude of the aircraft.
In addition, the images according to the first type of perspective advantageously represent the projected shadows of each symbolic object at ground level or on a predetermined altitude plane, which is for example the current altitude of the aircraft. The vertical line and the shadow associated with each symbolic object make it possible to provide the user with an improved perception of the three-dimensional position of this object. Symbolic objects representative of crossing points associated with altitude constraints, viewed from the side, differ from symbolic objects representative of crossing points not associated with altitude constraints and from altitude profile points. In addition, the symbolic objects representative of crossing points associated with altitude constraints above, below or at which the aircraft must fly, respectively, differ from each other when viewed from the side. [0017] Thus, the crossing points not associated with altitude constraints, the crossing points associated with altitude constraints above, below or at which the aircraft must fly, and the altitude profile points can be distinguished from each other on computer-generated images according to the second type of perspective, viewed from the side, illustrating the vertical trajectory of the aircraft. [0018] In addition, seen from above, the symbolic objects representative of crossing points differ from the symbolic objects representative of altitude profile points. Thus, the crossing points and the altitude profile points can be distinguished from each other on computer-generated images according to the second type of perspective, seen from above, illustrating the horizontal trajectory of the aircraft. [0019] By way of example, FIG. 4 shows examples of representation of symbolic objects according to the first type of perspective, as well as the representation according to the second type of perspective, seen from above and from the side, of these same objects. FIG.
4 thus shows a three-dimensional symbolic object 50 representative of a crossing point not associated with a constraint, according to the first type of perspective, as well as the representation of this object seen from above 50a and seen from the side 50b according to the second type of perspective. FIG. 4 also shows three-dimensional symbolic objects 52, 54 and 56 representative of crossing points respectively associated with an altitude above which the aircraft is to fly, an altitude below which the aircraft is to fly, and an altitude at which the aircraft is to fly, according to the first type of perspective, as well as representations of these objects seen from above 52a, 54a, 56a and seen from the side 52b, 54b and 56b, according to the second type of perspective. FIG. 4 further illustrates a three-dimensional symbolic object 58 representative of an altitude profile point, for example of the TOD, TOC or BOSC type, according to the first type of perspective, as well as representations of this object seen from above 58a and seen from the side 58b, according to the second type of perspective. Moreover, the three-dimensional shape according to the first type of perspective of the symbolic objects representative of the position of the aircraft and of aircraft that could interfere with the trajectory of the aircraft is chosen so that the orientation of the aircraft, seen from the side, is quickly detectable. Preferably, the vertical line associated with a symbolic object representative of the position of an aircraft extends to the current altitude of the aircraft, and the shadow of such an object is the shadow projected on a plane at the current altitude of the aircraft, which facilitates the comparison between the current altitude of the aircraft and the altitude of surrounding aircraft. FIG.
5 thus illustrates an example of a three-dimensional representation 60 according to the first type of perspective of a symbolic object representative of the position of an aircraft, with which are associated a vertical line 62 and a projected shadow 64. Also shown in FIG. 5 are representations of this object seen from above 66 and seen from the side 68 according to the second type of perspective. Preferably, when an aircraft is capable of interfering with the trajectory of the aircraft, the module 36 is configured to thicken and/or highlight, for example in red, the portion of the trajectory concerned, as illustrated at 69. Clouds and thunderstorm cells are represented to scale in the synthetic images, in particular from meteorological information received from a meteorological radar, in particular a three-dimensional radar, disposed in the aircraft, or from a ground station. As illustrated in FIG. 6, the clouds and thunderstorm cells are represented on the three-dimensional synthesis images according to the first type of perspective in the form of three-dimensional colored masses 70, 72 respectively, which are preferably transparent, so as not to hide objects, such as an aircraft, a crossing point, an altitude profile point, or a portion of the trajectory, which would be located behind or inside the cloud or storm cell. Preferably, one or more sectional views 70a, 72a of the cloud or thunderstorm cell are superimposed on the associated three-dimensional mass to allow the user to apprehend the size and the internal structure of the cloud or storm cell. As illustrated in FIG. 6, these are for example horizontal or vertical sections, according to a plane which intersects the trajectory of the aircraft and which can be adjustable by the user. [0020] In the synthetic images according to the second type of perspective, the clouds and storm cells are represented in the form of a colored mass, preferably transparent.
Thus, the display on the synthetic images of objects representative of clouds or thunderstorm cells makes it possible to identify possible interferences of clouds or storm cells with the trajectory of the aircraft, and to modify the trajectory of the aircraft to avoid them. According to the first type of perspective, the apparent size of the objects depends on the distance between these objects and the point of view. Thus, the apparent size of these symbolic objects allows the user to be aware of the distance of these objects, including the distance of the points, aircraft, clouds or thunderstorm cells represented by these objects. Preferably, when the distance between a symbolic object and the viewpoint is between a predetermined minimum distance and a predetermined maximum distance, the apparent size of the symbolic object is a strictly decreasing function, for example linear, of the distance between the symbolic object and the point of view. On the other hand, when the distance between the symbolic object and the point of view is less than the predetermined minimum distance, the apparent size of the object remains constant and equal to the apparent size that the symbolic object would have if the distance between the symbolic object and the point of view were equal to the predetermined minimum distance. Preferably, a transparency effect is also applied to the object. This makes it possible to prevent an object very close to the point of view from obscuring the field of view. In addition, when the distance between the symbolic object and the viewpoint is greater than the predetermined maximum distance, the apparent size of the object remains constant and equal to the apparent size that the symbolic object would have if the distance between the symbolic object and the viewpoint were equal to the predetermined maximum distance. This makes it possible to keep visible any object located in the field of vision even if this object is very far from the point of view.
The predetermined minimum and maximum distances are, for example, parameterizable and can be modified by a user. [0021] Thus, the module 36 is configured to apply a resizing factor to each symbolic object according to its distance to the viewpoint. This resizing factor is representative of the actual size of the object relative to the environment. [0022] When the distance between a symbolic object and the viewpoint is between the predetermined minimum distance and the predetermined maximum distance, the resizing factor is equal to 1, which means that the object is represented at its nominal size relative to the environment. When the distance between the symbolic object and the viewpoint is less than the predetermined minimum distance, the resizing factor is less than 1, and is a strictly increasing function, for example linear, of the distance between the object and the point of view. Thus, when this distance decreases, the actual size of the object relative to the environment decreases. When the distance between the symbolic object and the viewpoint is greater than the predetermined maximum distance, the resizing factor is greater than 1, and is a strictly increasing function, for example linear, of the distance between the object and the point of view. Thus, as this distance increases, the actual size of the object relative to the environment increases. When the distance between the symbolic object and the viewpoint is less than the predetermined minimum distance or greater than the predetermined maximum distance, the resizing factor is for example equal to the ratio between the distance of the object from the point of view and the predetermined minimum or maximum distance respectively.
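The clamped resizing factor, and the constant apparent size it produces outside the band between the minimum and maximum distances, can be sketched as follows (Python; names and the simple 1/d perspective model are illustrative assumptions):

```python
def resize_factor(d, d_min, d_max):
    # Resizing factor applied to a symbolic object as a function of
    # its distance d to the point of view, per the ratios given in the
    # text: 1 inside [d_min, d_max], d / d_min below it (shrinking the
    # actual size), d / d_max above it (growing the actual size).
    if d < d_min:
        return d / d_min
    if d > d_max:
        return d / d_max
    return 1.0

def apparent_size(nominal, d, d_min, d_max, f=1.0):
    # Under a conical perspective, apparent size ~ actual size / d;
    # the clamped factor makes it constant outside [d_min, d_max].
    return f * nominal * resize_factor(d, d_min, d_max) / d
```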
The synthesis images are thus generated by the module 36 as a function of image parameters which define in particular: the type of perspective of the image; the position of the central point of interest Pc; for the images according to the first type of perspective, the position of the point of view Pv, in particular its observation distance Z to the central point of interest Pc, the horizontal angular position ah and the vertical angular position av, as well as the aperture angles a1 and a2; and the scale of the image, which is defined, for the images according to the first type of perspective, by the observation distance Z, and for the images according to the second type of perspective, by the actual length A1 of the zone represented on these images. Preferably, according to the first type of perspective, not all positions of the point of view Pv are permitted. For example, the horizontal and vertical angular positions each lie within a predefined authorized angular range. For example, the horizontal angular position ah is between -90 degrees and 90 degrees, and the vertical angular position av is between -15 degrees and 90 degrees. These parameters can be set by default. [0023] In particular, the synthesis image can be viewed by default according to the first type of perspective. In addition, the horizontal aperture angle is for example set at 90 degrees by default, the vertical aperture angle then being adapted according to the length and width of the displayed image. [0024] The vertical angular position av can also be set by default, for example at a value of 30 degrees. Furthermore, the observation distance Z between the central point of interest Pc and the point of view Pv can be fixed by default, in particular such that a set of predetermined points, hereinafter called the set of points of interest, can be fully included in the observation pyramid.
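The authorized ranges and default values cited above can be gathered in a small sketch (Python; the container and names are illustrative, not part of the patent):

```python
# Authorized angular ranges cited in the text, in degrees.
AH_RANGE = (-90.0, 90.0)   # horizontal angular position ah
AV_RANGE = (-15.0, 90.0)   # vertical angular position av

# Default image parameters cited in the text.
DEFAULTS = {
    "perspective": "first",  # first type (conical) by default
    "a1_deg": 90.0,          # default horizontal aperture angle
    "av_deg": 30.0,          # default vertical angular position
}

def clamp_angle(value, allowed):
    # Keep an angular position inside its authorized range.
    lo, hi = allowed
    return max(lo, min(hi, value))
```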
The module 36 is furthermore configured to automatically determine an optimal position of the point of view Pv making it possible to optimize the portion of trajectory visualized on the image. In particular, a distance Zo between the point of view Pv and the central point of interest Pco being fixed, and a vertical angular position av being fixed, the module 36 is configured to automatically determine a position of the point of view Pv, located at the distance Zo from the central point of interest Pco and at the vertical angular position av, making it possible to maximize the actual length of the portion of trajectory visualized on the image, the aperture angles a1 and a2 remaining fixed. The vertical angular position av is, for example, set at 30 degrees. [0025] To determine an optimal horizontal angular position denoted ahopt, the module 36 is configured to determine a set of successive points on the trajectory, denoted Pi, according to a predetermined sampling, from an initial point which corresponds, for example, to the position of the aircraft, preferably in the downstream direction of this trajectory. Indeed, the points of interest of the trajectory for an operator are generally those which have not yet been reached by the aircraft. For example, the points Pi are regularly spaced along the trajectory. The module 36 is furthermore configured to determine an optimal horizontal angular position making it possible to optimize the number of points Pi included in the observation pyramid, the points Pi of the trajectory closest to the initial point having priority over the points Pi of the trajectory farther from the initial point. [0026] For example, the module 36 is configured to successively adjust the horizontal angular position, from a starting horizontal angular position ah0, so as to successively include the points Pi in the observation pyramid, while keeping all the points of interest in the observation pyramid.
[0027] To this end, the module 36 is configured to iteratively implement successive phases of determination of a modified horizontal angular position ahi, so as to successively include the successive points Pi of the trajectory in the observation pyramid. Thus, during the first of these iterative phases, the module 36 is configured to determine a first modified horizontal angular position ah1. For this purpose, the module 36 is configured to determine a modified horizontal angular position such that the point P1 is included in the observation pyramid, preferably such that the edge of the observation pyramid closest to the point P1 before modification of the initial horizontal angular position ah0 intersects the point P1 when the horizontal angular position is equal to this modified horizontal angular position. If this modified horizontal angular position is not within the authorized angular range predefined for the horizontal angular position, for example not between -90 degrees and 90 degrees, the module 36 is able to choose, as the first modified horizontal angular position ah1, the limit of this authorized range closest to the modified horizontal angular position thus determined. If the modified horizontal angular position is within the authorized angular range predefined for the horizontal angular position, the module 36 is adapted to choose this modified angular position as the first modified horizontal angular position ah1. [0028] Then, in each subsequent phase, the module 36 is configured to determine a new modified horizontal angular position ahi.
For this purpose, the module 36 is configured to determine a modified horizontal angular position such that the point Pi is included in the observation pyramid, preferably such that the edge of the observation pyramid closest to the point Pi before modification of the modified horizontal angular position ahi-1 determined during the previous iteration intersects the point Pi when the horizontal angular position is equal to this modified horizontal angular position. Likewise, if the modified horizontal angular position is not within the authorized angular range predefined for the horizontal angular position, the module 36 is adapted to choose, as the new modified horizontal angular position ahi, the limit of this authorized range closest to the determined angular position. [0029] If the modified horizontal angular position is within the authorized angular range predefined for the horizontal angular position, the module 36 is adapted to choose this modified angular position as the new modified horizontal angular position ahi. [0030] At each phase, the module 36 is configured to end the sequence of iterations if, during this iteration, it is not possible to find a horizontal angular position such that the considered point Pi of the trajectory is included in the observation pyramid without other points P1, ..., Pi-1 of the trajectory, or points of the set of points of interest, leaving the observation pyramid. [0031] The optimal horizontal angular position ahopt is then selected by the module 36 as the last modified angular position ahi-1 determined. The image parameters can also be set by an operator, by means of modification actions on the displayed synthesis image carried out via the man-machine interface 18. [0032] Such modification actions are performed by an operator via the man-machine interface 18.
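The iterative search described in paragraphs [0027] to [0031] can be sketched as a greedy loop (Python; the visibility predicate, the scan over candidate angles, and the names are assumptions of this sketch, which stands in for the geometric edge-intersection computation):

```python
def optimal_horizontal_angle(points, candidates, visible, interest=()):
    # Walk the sampled trajectory points P1, P2, ... downstream; at
    # each phase pick a horizontal angular position that brings the
    # new point Pi into the observation pyramid while keeping all
    # previously included points and all points of interest visible.
    # Stop as soon as no such position exists; the last position found
    # plays the role of ahopt. `visible(ah, p)` is a caller-supplied
    # predicate and `candidates` stands for the authorized range.
    ah = 0.0                      # starting horizontal angular position ah0
    kept = list(interest)
    for p in points:
        found = None
        for c in candidates:
            if visible(c, p) and all(visible(c, q) for q in kept):
                found = c
                break
        if found is None:
            break                 # Pi cannot be added: end of iterations
        ah = found
        kept.append(p)
    return ah
```

With a toy predicate where a point (a bearing in degrees) is visible within 45 degrees of the viewing angle, the loop includes the nearest points first and stops when a far point would force an earlier one out.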
These modification actions may notably consist of an action of modifying the perspective of the synthesis image, an action of modifying the position of the point of view, an action of modifying the central point of interest, or an action of modifying the scale of the image, which corresponds, in the case of an image according to the first type of perspective, to a modification of the observation distance between the point of view and the central point of interest, or, in the case of an image according to the second type of perspective, to a modification of the actual size of the zone represented. [0033] The module 36 is configured to detect such modification actions, and to generate modified synthetic images in response to such modification actions. In order to facilitate the realization of some of these actions by an operator, the generation module 36 is able to superimpose on the synthesis images one or more objects, each associated with a specific modification action, and each indicating a zone of the image in which the modification action is to be performed, as described below. In particular, the module 36 is adapted to display on each synthesis image an icon 80 forming an actuatable button, whose actuation is intended to modify the central point of interest of the image, in order to take the current position of the aircraft as the new central point of interest. [0034] This actuation is carried out by means of the man-machine interface 18, for example by positioning a control member on the area of the touch screen 16 displaying the icon 80. The icon 80 has, for example, the general shape of an aircraft. The module 36 is configured to detect an action of modification of the central point of interest, in order to pass from an initial image centered on a central point of interest Pco to a final image centered on a final modified central point of interest Pcn.
The module 36 is configured to determine the final modified central point of interest Pcn as a function of the detected modification action. In addition, the module 36 is configured to generate a final modified synthetic image centered on the final modified central point of interest Pcn and to control its display by the display device 14. Preferably, the modification of the central point of interest is carried out without modifying the distance Z between the point of view and the central point of interest. Such a modification therefore generally also results in a change of the position of the point of view. The final modified synthesis image is then viewed from a modified point of view Pvn different from the point of view Pvo of the initial image. Moreover, this modification of the central point of interest is for example carried out without modifying the viewing angles a1 and a2. In addition, the module 36 is configured to generate, at a plurality of successive transition instants, a transition image between the initial image and the final image, with a view to displaying these successive transition images and then the final image. Each transition image generated at a given transition instant is centered on an intermediate central point of interest Pci, located between the initial central point Pco and the final modified central point Pcn, and is seen from a modified point of view Pvi located between the initial point of view Pvo and the final point of view Pvn. The module 36 is further configured to control the successive display of the transition images at a plurality of instants between the display instant of the initial image and the display instant of the final image. An action of modification of the central point of interest can be of several types. A first type of modification action of the central point of interest comprises the actuation of the icon 80, in order to center the synthesis image on the current position of the aircraft.
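The text above only requires the intermediate centers and viewpoints to lie between the initial and final positions; it does not specify the interpolation law. A minimal sketch, assuming simple linear interpolation (the function and variable names are illustrative, not from the patent):

```python
def transition_frames(pc0, pcn, pv0, pvn, n_steps):
    """Yield (center, viewpoint) pairs for the successive transition
    images between the initial image (pc0, pv0) and the final image
    (pcn, pvn).  Linear interpolation is an assumption; the text only
    requires each intermediate point Pci (resp. Pvi) to lie between
    the initial and final points."""
    frames = []
    for k in range(1, n_steps + 1):
        t = k / n_steps  # interpolation parameter, 0 < t <= 1
        pci = tuple(a + t * (b - a) for a, b in zip(pc0, pcn))
        pvi = tuple(a + t * (b - a) for a, b in zip(pv0, pvn))
        frames.append((pci, pvi))
    return frames
```

Displaying the frames in order then reproduces the continuous slide of the image center from Pco to Pcn, with the final frame exactly centered on Pcn.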
A second type of modification of the central point of interest consists in a selection of any target point of the synthesis image via the human-machine interface 18, in order to choose this point as the central point of interest. This selection is for example made by positioning a control member on the touch screen 16 opposite the target point. [0035] The module 36 is configured to detect an action of modification of the central point of interest of the first or second type, centered on the position of the aircraft or on the target point respectively, to determine the final modified central point of interest Pcn according to the detected modification action, to generate a final modified synthesis image centered on the final modified central point of interest Pcn, and to control its display by the display device 14. In addition, as described above, the module 36 is configured to generate transition images at a plurality of successive transition instants and to control the successive display of these transition images at a plurality of instants between the display instant of the initial image and the display instant of the final image. A third type of modification action of the central point of interest comprises a movement of a member by an operator between an initial position and a final position. For example, this member is a control member, and the third type of action of modifying the central point of interest comprises a movement of this control member by an operator between an initial position and a final position on the screen 16. In a first mode, such a displacement is intended to cause a corresponding displacement of the central point of interest on the synthesis image. [0036] According to a second mode, such a displacement is intended to cause a displacement of the central point of interest on the synthetic image along the trajectory of the aircraft.
According to the second mode, the central point of interest remains along the trajectory regardless of the modifying action, in particular the movement of the member by the operator, even when this movement is made in a direction that is not parallel to, and thus secant with, the tangent to the trajectory curve at the initial point of interest. The choice of the first mode or the second mode may for example be carried out by an operator via the man-machine interface 18. When the first mode or the second mode is activated, the module 36 is configured to detect a displacement of a member between an initial position and a final position, in particular a movement of a control member by an operator on the touch screen 16 between an initial position and a final position. The module 36 is configured to detect, at each instant during this displacement, an intermediate position of the member between its initial position and its final position, as well as an intermediate displacement vector between the initial position and the intermediate position. At the end of the movement of the member by the operator, the module 36 is configured to determine a final displacement vector of the member between its initial position and its final position. When the first mode is activated, the module 36 is configured to determine at each instant, during the displacement of the member, a translation vector for the central point of interest as a function of the displacement of the member between its initial position and its intermediate position at this instant, and to determine an intermediate modified central point of interest Pci by applying the translation vector to the initial central point of interest Pco. [0037] For example, the translation vector for the central point of interest is determined at each instant as the component, in a horizontal plane of the synthesis image, of the displacement vector of the member between its initial position and its intermediate position.
The module 36 is further configured to determine a final translation vector for the central point of interest as a function of the final displacement vector, and to determine a final modified central point of interest Pcn by applying the final translation vector to the initial central point of interest Pco. The final translation vector is for example determined as the component, in a horizontal plane of the synthesis image, of the final displacement vector of the member between its initial position and its final position. [0038] When the second mode is activated, the synthesis image is centered by default on a central point of interest located along the curve 44 representative of the trajectory of the aircraft. The initial central point of interest is thus located along this trajectory curve. When the second mode is activated, the module 36 is configured to determine at each instant, during the displacement of the member, an intermediate modified central point of interest Pci which is situated along the curve representative of the trajectory of the aircraft, regardless of the displacement of the member between its initial position and its intermediate position. Furthermore, the module 36 is configured to determine, as a function of the modifying action, a final modified central point of interest which is situated along the curve representative of the trajectory of the aircraft, regardless of the displacement of the member between its initial position and its final position. The second mode thus allows an operator to modify the central point of interest while remaining along the trajectory of the aircraft, and thus to visualize the terrain located along this trajectory, without the operator having to move the member in a direction corresponding at each instant to the direction of the trajectory.
In particular, the module 36 is configured to determine, at each instant during the modification action, from the displacement vector of the member between its initial position and its intermediate position, the component of this vector in a horizontal plane of the synthesis image. The module 36 is furthermore configured to determine at each instant, from this horizontal component, a curvilinear distance on the trajectory curve between the initial central point of interest Pco and an intermediate modified central point of interest Pci, then to determine the intermediate modified central point of interest Pci by applying to the initial central point of interest Pco a displacement along the trajectory curve 44 of a length equal to the curvilinear distance thus determined. For example, the curvilinear distance is determined according to the horizontal component of the displacement vector and a vector tangent to the curve at the initial central point of interest, in particular as a function of a dot product between the horizontal component and the tangent vector. At the end of the displacement of the member by the operator, the module 36 is configured to determine, from the final displacement vector of the member between its initial position and its final position, the component of this final displacement vector in a horizontal plane of the synthesis image. The module 36 is further configured to determine, from this horizontal component, a curvilinear distance on the trajectory curve between the initial central point of interest Pco and the final modified central point of interest Pcn, and then to determine the final modified central point of interest Pcn by applying to the initial central point of interest a displacement along the trajectory curve of a length equal to the curvilinear distance thus determined.
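The second-mode computation described above (horizontal projection of the drag vector, dot product with the tangent, displacement by the resulting arc length) can be sketched as follows, assuming the trajectory is approximated by a 2D polyline in the horizontal plane; all names are illustrative:

```python
import math

def slide_along_trajectory(poly, s0, drag, tangent):
    """Second-mode drag: the curvilinear distance is the dot product
    of the (horizontal) drag vector with the unit tangent at the
    initial central point of interest; the modified point is obtained
    by walking that arc length along the trajectory polyline `poly`
    (a list of (x, y) points) from the initial arc-length position s0.
    The polyline approximation is an assumption of this sketch."""
    # Curvilinear distance from the dot product (rule given in the text).
    ds = drag[0] * tangent[0] + drag[1] * tangent[1]
    target = s0 + ds
    # Locate the point at arc length `target` along the polyline.
    acc = 0.0
    for (x0, y0), (x1, y1) in zip(poly, poly[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        if acc + seg >= target:
            t = max(0.0, (target - acc) / seg)
            return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        acc += seg
    return poly[-1]  # past the end: remain on the last trajectory point
```

Note that a drag perpendicular to the tangent yields a zero dot product, so the central point of interest does not move: this is exactly the behavior that keeps the point on the trajectory regardless of the drag direction.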
[0039] According to the first and second modes, the module 36 is configured to generate at each instant an intermediate modified synthesis image centered on the intermediate modified central point of interest Pci determined at this instant and to control its display by the display device 14. The module 36 is also configured to generate, at the end of the displacement, a final modified synthesis image centered on the final modified central point of interest Pcn and to control its display by the display device 14. Preferably, at the end of the movement of the member by the operator, the module 36 is configured to virtually extend this movement in order to add an effect of inertia to the movement of the member by the operator. The module 36 is thus configured to determine one or more additional modified synthetic images to be displayed after the final modified synthesis image, each centered on an additional modified central point of interest determined on the basis of a virtual displacement beyond the actual final position of the member at the end of its displacement. The generation module 36 is also configured to display, on the synthetic images according to the first type of perspective, an icon 82 forming a vertical slide or an icon 84 forming a horizontal slide. The vertical slide 82 is associated with an action of modification of the angle of view in a vertical plane, that is to say an action of modification of the position of the point of view of the image, this modification being a rotation of the point of view with respect to the central point of interest in the vertical plane containing the initial point of view and the central point of interest, that is to say a modification of the vertical angular position av of the point of view.
This action of modifying the vertical angular position av is carried out by means of the man-machine interface 18, for example by moving a control member over the area of the touch screen 16 displaying the vertical slide 82, from top to bottom or from bottom to top along this vertical slide 82. Notably, a downward movement along the vertical slide 82 is able to cause a rotation of the position of the point of view towards the bottom of the image, while an upward movement along the vertical slide 82 is able to rotate the position of the point of view towards the top of the image. The vertical slide 82 extends substantially vertically on the synthesis image between an upper stop 82a and a lower stop 82b, which are for example associated with the limits of the authorized range for the vertical angular position av. For example, the upper stop 82a is associated with a vertical angular position av of 90 degrees, while the lower stop 82b is associated with a vertical angular position av of -15 degrees. The horizontal slide 84 is associated with an action of modification of the angle of view in a horizontal plane, that is to say an action of modification of the position of the point of view of the image, this modification being a rotation of the point of view with respect to the central point of interest in the horizontal plane containing the initial point of view and the central point of interest, that is to say a modification of the horizontal angular position ah of the point of view. This action of modifying the horizontal angular position ah is carried out by means of the man-machine interface 18, for example by moving a control member on the zone of the touch screen 16 displaying the horizontal slide 84, from left to right or from right to left along this slide 84. Notably, a left-to-right movement along the horizontal slide 84 is capable of causing the position of the point of view to rotate in a counter-clockwise direction.
Conversely, a right-to-left movement along the horizontal slide 84 is capable of rotating the position of the point of view in a clockwise direction. The horizontal slide 84 extends substantially horizontally on the synthesis image between a left stop 84a and a right stop 84b, which are for example associated with the limits of the authorized range for the horizontal angular position ah. For example, the left stop 84a is associated with a horizontal angular position ah of -90 degrees, while the right stop 84b is associated with a horizontal angular position ah of 90 degrees. Preferably, when no control member is positioned on the touch screen 16 on the area displaying the slides 82 and 84, the slides 82 and 84 are displayed only in transparency. This makes it possible to avoid overloading the synthesis images with the slides 82 and 84 when their display is not necessary. Furthermore, as long as a positioning of a control member on the touch screen 16 on the zone displaying the vertical slide 82 or the horizontal slide 84 is detected, the module 36 is able to superimpose on the vertical slide 82 or the horizontal slide 84, respectively, a marker signaling the current position of the control member on the area displaying the slide 82 or 84. This marker is for example a horizontal or vertical line crossing the vertical slide 82 or the horizontal slide 84, respectively. The vertical slide 82 and the horizontal slide 84 are each associated with a predetermined scale of rotation. In particular, a given position along the vertical slide 82, respectively along the horizontal slide 84, is associated with a given vertical angular position av, respectively with a given horizontal angular position ah. The module 36 is configured to detect an angle-of-view modification action in a horizontal or vertical plane, and to determine in real time, at each instant during this displacement, a modified horizontal or vertical angular position as a function of the position of the member on the slide 82 or 84.
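The mapping from slide position to angular position can be sketched as follows, assuming a linear scale between the two stops (the text only requires a predetermined scale; the linearity and the pixel length are illustrative assumptions):

```python
def slider_to_angle(pos, length, a_min, a_max):
    """Map a position along a slide (0 at one stop, `length` at the
    other) to an angular position, clamped to the authorized range
    [a_min, a_max].  The linear scale is an assumption; the text only
    requires a predetermined correspondence between slide position
    and angle."""
    pos = min(max(pos, 0.0), length)  # clamp to the slide's stops
    return a_min + (pos / length) * (a_max - a_min)
```

For the vertical slide 82 with stops at -15 and 90 degrees, `slider_to_angle(0.0, 100.0, -15.0, 90.0)` returns the lower-stop angle -15.0 and `slider_to_angle(100.0, 100.0, -15.0, 90.0)` returns the upper-stop angle 90.0; positions beyond a stop are held at that stop's angle.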
The module 36 is furthermore configured to determine a modified point of view at the horizontal or vertical angular position thus modified, and to generate a modified synthetic image seen from the modified point of view thus determined. [0040] The module 36 is furthermore configured to detect a scaling action on the image. A scaling action corresponds, for the images according to the first type of perspective, to a modification of the observation distance Z between the point of view Pv and the central point of interest Pc. For the images according to the second type of perspective, a scaling action corresponds to a modification of the actual size of the zone represented, that is to say to a modification of the length A1 and consequently of the width A2 of the area shown. In particular, an increase in the scale of the synthesis image corresponds, for the images according to the first type of perspective, to a reduction in the observation distance Z, and, for the images according to the second type of perspective, to a decrease in the length A1 and the width A2 of the zone represented. Conversely, an action of reduction of the scale of the synthesis image corresponds, for the images according to the first type of perspective, to an increase in the observation distance Z, and, for the images according to the second type of perspective, to an increase in the length A1 and the width A2 of the area shown. The module 36 is also configured to generate, in response to such a modification action, modified synthesis images, and to control the display of these modified synthetic images on the display device 14. [0041] A scaling action is performed by a user via the man-machine interface 18. In particular, such a modifying action comprises moving two control members on the touch screen 16 in two substantially opposite directions, which can be followed by holding the two control members on the touch screen 16 at the end of their displacement.
[0042] The module 36 is configured to detect the positioning, at an initial instant, of two control members on the touch screen 16 at two initial positions associated with two distinct initial points P1 and P2, and to determine, when this positioning is detected, a midpoint Pm located midway between these two initial points, as well as a first zone 98, a second zone 100 and a third zone 102 centered on this midpoint. As illustrated in FIG. 7, the first, second and third zones 98, 100, 102 are defined by two closed curves C1, C2 centered on the midpoint Pm. In the example shown, the two closed curves C1, C2 each have the shape of a square, one of whose diagonals passes through the two initial points P1 and P2. In a variant, the two curves C1, C2 are of polygonal, round or oval shape, or have any curved shape. Each of the curves C1, C2 defines a set of points situated inside it. The two initial points P1 and P2 are included in the set of points defined by the second curve C2 but not included in the set of points defined by the first curve C1. [0043] The first zone 98, which includes the initial points P1 and P2, is formed by the set of points contained between the first curve C1 and the second curve C2. The second zone 100 is formed by all the points contained inside the first curve C1. This second zone 100 is associated with a scale reduction action, as described below. [0044] The third zone 102 is formed by the points lying outside the curves C1 and C2. This third zone 102 is associated with a scale increase action, as described below. An action of increasing the scale of the synthesis image comprises a displacement of the two control members on the touch screen 16 along a substantially rectilinear trajectory in two substantially opposite directions, possibly followed by holding the two control members on the touch screen 16 at the end of their displacement. With reference to FIG.
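The zone test above can be sketched as follows. As an illustrative assumption, each square curve with a diagonal along the axis through P1 and P2 is modeled, in the frame whose x-axis is that diagonal direction, as the set |x'| + |y'| <= r; the half-diagonals r1 < r2 of C1 and C2 are also assumptions:

```python
def classify_zone(p, pm, u, r1, r2):
    """Classify the point `p` relative to the two square curves C1
    (inner, half-diagonal r1) and C2 (outer, half-diagonal r2)
    centered on the midpoint `pm`, with one diagonal along the unit
    vector `u` joining the initial touch points.  Returns 1, 2 or 3
    for the first zone (between C1 and C2, containing P1 and P2),
    the second zone (inside C1) and the third zone (outside C2).
    The diamond-metric modeling of the squares is an assumption."""
    dx, dy = p[0] - pm[0], p[1] - pm[1]
    # Coordinates in the frame whose x-axis is the diagonal direction u.
    xp = dx * u[0] + dy * u[1]
    yp = -dx * u[1] + dy * u[0]
    s = abs(xp) + abs(yp)  # "square radius" of p about pm
    if s <= r1:
        return 2   # inner zone 100: scale reduction continues
    if s <= r2:
        return 1   # annular zone 98: distance-based scaling
    return 3       # outer zone 102: scale increase continues
```

With this classification, the module can switch between the two calculation modes described below simply by testing which zone each control member occupies at the current instant.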
7, such a scale increase action comprises a movement of the two control members from the initial points P1 and P2 in two opposite directions B, B' away from the midpoint Pm. An action of increasing the scale therefore corresponds to an increase in the distance d between the two control members. An action of reducing the scale of the synthesis image comprises a displacement of the two members on the touch screen 16 along a substantially rectilinear path in two substantially opposite directions directed toward each other, possibly followed by holding the two control members on the touch screen 16 at the end of their displacement. With reference to FIG. 7, such a scale reduction action comprises a displacement of the two control members from the initial points P1 and P2 in two opposite directions C, C' towards the midpoint Pm. An action of reducing the scale therefore corresponds to a decrease in the distance d between the two control members. The module 36 is able to detect the displacements of the two control members and to determine at each instant, as a function of the position of these control members, a resizing factor for the initial image, hereinafter called the modification factor of the scale of the image. [0045] This scaling factor, noted y, is defined as a multiplying factor to be applied to a parameter of the initial image to determine a modified parameter associated with a modified scale. For example, for synthetic images according to the first type of perspective, a multiplication of the scale by the factor y corresponds to a multiplication of the observation distance by the factor y to determine a modified observation distance. For the synthetic images according to the second type of perspective, a multiplication of the scale by the factor y corresponds to a multiplication of the length A1 and the width A2 of the zone represented by the image by a factor y. [0046] When scaling down, the scaling factor is strictly greater than 1.
When scaling up, the scaling factor is strictly between 0 and 1. The module 36 is configured to determine, at each instant noted ti, the modification factor of the scale yi as a function of the position of the control members with respect to the first zone 98. In particular, the module 36 is configured to determine the scaling factor according to a first calculation mode as long as the control members remain positioned on the touch screen 16 opposite points within the first zone 98, and to determine the scaling factor according to a second calculation mode when the control members are positioned on the touch screen 16 opposite points outside the first zone 98, that is to say within the second zone 100 or the third zone 102. As long as the control members remain positioned on the touch screen 16 opposite points situated inside the first zone 98, the module 36 determines at each instant the modification factor of the scale yi as a function of the distance between these control members at this instant and the distance between the initial points P1 and P2. Preferably, the scaling factor yi is a strictly decreasing function of the distance di between the control members, for example a linear function of the deviation or of the ratio between the distance between the initial points P1 and P2 and the distance di between the control members at this instant. By way of example, the modification factor of the scale yi is determined according to a formula of the type: yi = k · d0/di, where d0 is the distance between the initial points P1 and P2, di the distance between the control members at the instant ti, and k a strictly positive proportionality factor. When the control members are positioned on the touch screen 16 opposite points outside the first zone 98, the module 36 determines at each instant t'i the modification factor of the scale, denoted y'i, as a function of the duration of maintenance of the control members outside the first zone 98.
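The first calculation mode can be sketched directly from the ratio formula above (the value k = 1 is an assumption; the text only requires k to be strictly positive):

```python
def scale_factor_distance(d0, d, k=1.0):
    """First calculation mode: scaling factor as a strictly decreasing
    function of the current distance `d` between the two control
    members, here the ratio form yi = k * d0 / d suggested by the
    text, with d0 the distance between the initial points P1 and P2.
    y > 1 reduces the scale (zoom out); 0 < y < 1 increases it
    (zoom in).  k = 1 is an illustrative assumption."""
    return k * d0 / d
```

Spreading the members apart doubles the distance and halves the factor (scale increase), while pinching them to half the initial distance doubles the factor (scale reduction), matching the monotonicity stated in the text.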
This duration of maintenance, denoted Ti, corresponds to the time elapsed between the instant noted t'0, at which one or both control members reached the limits of the first zone 98, and the instant t'i considered. Preferably, as long as the control members are positioned on the touch screen 16 opposite points outside the first zone 98, the scaling factor y'i is independent of the position of the screen points located opposite these control members. At the instant t'0, the modification factor of the scale y'0 is equal to the modification factor of the scale determined according to the first calculation mode. Then, the scaling factor is a strictly monotonic function of the holding time Ti. In particular, if the control members are positioned on the touch screen 16 opposite points located inside the second zone 100, the scale modification factor is a strictly increasing function of the holding time Ti; conversely, if the control members are positioned on the touch screen 16 opposite points situated inside the third zone 102, the scale modification factor is a strictly decreasing function of the holding time Ti. Thus, when the control members are located in the second zone 100 or in the third zone 102, simply holding the control members on the touch screen 16 makes it possible to continue the action of reducing or increasing the scale, respectively. It is thus possible for a user to resize the area represented by the image by the desired scaling factor without it being necessary, because of the finite dimensions of the touch screen 16, to perform several successive modification actions. Preferably, the absolute value of the derivative of the scaling factor y'i is an increasing function of time, which means that the change of scale is made more and more rapidly as the holding time Ti increases.
This makes it possible, in particular, to move quickly from the scale of a city to the scale of a country or a continent, or conversely to move quickly from the scale of a continent to the scale of a country or a city, in a single gesture. [0047] In particular, when the control members are positioned on the touch screen 16 opposite points located in the second zone 100, the modification factor of the scale y'i is a convex function, in particular a strictly convex function, of the holding time Ti. For example, the scaling factor y'i increases exponentially as the holding time Ti increases. In another example, the scaling factor y'i is a piecewise affine function, the positive slope of the affine function increasing as the holding time Ti increases. When the control members are positioned on the touch screen 16 opposite points located in the third zone 102, the modification factor of the scale y'i is a concave function, in particular a strictly concave function, of the holding time Ti. For example, the scaling factor y'i decreases exponentially as the holding time Ti increases. In another example, the modification factor of the scale y'i is a piecewise affine function, the negative slope of the affine function decreasing as the holding time Ti increases. [0048] As indicated above, the module 36 is configured to detect a scaling action and to determine, at each of a plurality of successive instants during such an action, a modification factor of the scale yi or y'i. Preferably, a minimum scale modification factor and a maximum scale modification factor are predetermined.
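The second calculation mode can be sketched with the exponential law given as an example above (the rate constant is an assumption; the text only requires a strictly monotonic function of the holding time):

```python
import math

def scale_factor_hold(y0, hold_time, rate, zone):
    """Second calculation mode: once the control members have left the
    first zone, the factor evolves with the holding time only,
    starting from y0, the factor reached under the first calculation
    mode.  The exponential law is one of the examples in the text;
    `rate` is an assumed positive constant.  In the second zone the
    factor grows (scale reduction continues); in the third zone it
    decays (scale increase continues)."""
    if zone == 2:
        return y0 * math.exp(rate * hold_time)   # strictly increasing
    if zone == 3:
        return y0 * math.exp(-rate * hold_time)  # strictly decreasing
    return y0  # back inside the first zone: no time-based evolution
```

Holding the members still in the outer or inner zone thus keeps multiplying the scale, which is what lets a single gesture run from a city scale to a continent scale despite the finite screen size.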
When the modification factor of the scale yi or y'i reaches the minimum or maximum scale modification factor, the modification factor of the scale yi or y'i remains equal to the minimum or maximum scale modification factor respectively, even if the distance di between the control members increases or decreases respectively, and even if the control members remain positioned on the touch screen 16 opposite points in the second or third zone. Furthermore, the module 36 is configured to apply, at each of the successive instants, the modification factor of the scale yi or y'i determined at this instant to the scale of the initial synthesis image to determine a modified scale. In particular, according to the second type of perspective, a multiplication of the scale of the initial image by the scaling factor corresponds to a multiplication of the length A1 and the width A2 of the zone represented by the image by a factor y. According to the first type of perspective, a multiplication of the scale of the initial image by the scaling factor corresponds, at fixed aperture angle, to a multiplication of the distance Z between the point of view and the central point of interest by the factor y. The module 36 is further configured to generate, at each of these instants, a modified image with the modified scale thus determined, and to control its display on the display device 14. Preferably, when the modifying action corresponds to an increase in the scale, the modified image has the midpoint Pm as its central point of interest. In a variant, the modified image retains the same central point of interest as the initial image. Likewise, when the modifying action corresponds to a reduction of the scale, the modified image has, for example, the midpoint Pm as its central point of interest. In a variant, the modified image retains the same central point of interest as the initial image.
As soon as the control members are no longer placed on the touch screen 16, the action of modifying the scale of the synthesis image stops. Preferably, the module 36 is configured to compare the dimensions A1n and A2n or the distance Zn associated with the last modified image generated with predetermined size or distance thresholds, and to determine the size thresholds or the distance threshold, respectively, closest to the dimensions A1n and A2n or the distance Zn. The module 36 is further configured to generate a final modified image representing a zone whose dimensions correspond to the nearest determined size thresholds and/or a distance Z equal to the determined distance threshold, and to control its display on the display device 14. Thus, the observation distance Z or the dimensions of the zone shown are snapped to a predefined observation distance or to predefined dimensions. [0049] The module 36 is furthermore configured to detect an action for modifying the type of perspective of the image, for example an action to switch from an image according to the first type of perspective to an image according to the second type of perspective, in particular a top view or a side view, or an action to move from an image according to the second type of perspective to an image according to the first type of perspective. The transition from an image according to the first type of perspective to an image according to the second type of perspective is for example desirable when the operator, for example a pilot, wishes to modify the flight plan, in particular one or more waypoints of the flight plan. A view according to the second type of perspective, from above or from the side, is indeed better suited to such a modification than a view according to the first type of perspective. [0050] This modifying action is performed by means of the human-machine interface 18, for example by actuating a dedicated icon superimposed on the synthesis image by the module 36.
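The snapping step described above reduces to a nearest-threshold selection; a minimal sketch, with illustrative threshold values:

```python
def snap_distance(zn, thresholds):
    """Snap the observation distance Zn of the last modified image to
    the nearest of the predefined distance thresholds (the 'snapped'
    behavior at the end of a scaling action).  The threshold values
    used by a real system are configuration data; those in the test
    below are illustrative assumptions."""
    return min(thresholds, key=lambda z: abs(z - zn))
```

The same one-liner applies unchanged to the dimension thresholds for A1n and A2n in the second type of perspective.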
According to one example, an action to pass from a synthetic image according to the first type of perspective to a synthetic image according to the second type of perspective seen from above comprises a displacement of a control member on the area of the touch screen 16 displaying the vertical slide 82, from bottom to top along this vertical slide 82 up to the upper stop 82a. According to this example, the synthetic image according to the first type of perspective, hereinafter referred to as the initial synthesis image, from which the transition to the synthesis image according to the second type of perspective is carried out, is preferably seen from a point of view having a vertical angular position equal to 90°. It is therefore an image according to the first type of perspective seen from above. The module 36 is configured to detect a modification action aiming at moving from a synthesis image according to the first type of perspective to a synthesis image according to the second type of perspective, or from a synthesis image according to the second type of perspective to a synthesis image according to the first type of perspective, and to generate, in response to such an action, a plurality of successive three-dimensional transition images between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective, or between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective, respectively. The transition images are synthetic images according to the first type of perspective. [0051] The image according to the second type of perspective is for example an image viewed from above.
The transition images are intended to be displayed on the display device 14 at a plurality of successive transition instants, between an initial instant of display of the synthesis image according to the first type of perspective or according to the second type of perspective respectively, and a final instant of display of the synthesis image according to the second type of perspective or according to the first type of perspective respectively. The transition images are intended to ensure a continuous and fluid transition between the synthetic image according to the first type of perspective and the synthetic image according to the second type of perspective, or between the synthetic image according to the second type of perspective and the synthetic image according to the first type of perspective, respectively. Each transition image is centered around a central point of interest Pci, hereinafter referred to as an intermediate central point of interest, and is seen from a point of view Pvi, subsequently called an intermediate point of view, the observation distance Zi between this intermediate point of view and the intermediate central point of interest being called the intermediate observation distance; each transition image is viewed at an intermediate horizontal aperture angle a1i and an intermediate vertical aperture angle a2i. Each transition image represents an area of the environment of length A1i, called the intermediate length, and of width A2i, called the intermediate width, the ratio between the intermediate length A1i and the intermediate width A2i remaining constant and equal to the ratio between the length A1 and the width A2 of the three-dimensional synthesis image.
The intermediate horizontal aperture angle α1i and the intermediate vertical aperture angle α2i being related to each other as a function of the ratio between the intermediate length A1i and the intermediate width A2i, which remains constant, one or the other of these aperture angles, for example the intermediate horizontal aperture angle α1i, will generally be designated hereinafter by "aperture angle". During a transition between an initial synthesis image according to the first type of perspective and a final synthesis image according to the second type of perspective, the module 36 is configured to generate three-dimensional transition images according to the first type of perspective by decreasing, from one transition image to the next, the aperture angle α1i, and increasing, from one transition image to the next, the observation distance Zi, so that the length A1i of the area represented by each transition image remains within a predefined bounded range around the length A1 of the area represented by the initial synthesis image. The initial synthesis image can itself be considered as a transition image. The decrease of the aperture angle α1i from one transition image to the next makes it possible to produce a fluid transition between the synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective. In particular, the visual deviation between a three-dimensional image according to the first type of perspective seen with a very low aperture angle α1i, for example 5°, and the corresponding image according to the second type of perspective is almost imperceptible.
On the other hand, increasing the observation distance Zi makes it possible to keep the length of the area represented by the transition images substantially identical to the length of the area represented by the initial synthesis image, and therefore contributes to providing a fluid transition between the initial synthesis image according to the first type of perspective and the final synthesis image according to the second type of perspective. The module 36 is thus configured to determine, for each transition image, the intermediate aperture angle α1i and the intermediate observation distance Zi of this transition image. The aperture angle α1i of each transition image is smaller than the aperture angle α1 of the initial synthesis image and than the intermediate aperture angle of any previous transition image. The aperture angle α1i of each transition image is thus a decreasing function of the transition instant at which this transition image is intended to be displayed. By "decreasing function" is meant a non-constant decreasing function, that is to say such that there exist at least a first and a second successive transition instants, the second transition instant ti being later than the first transition instant ti−1, such that the intermediate aperture angle α1i−1 of a first transition image intended to be displayed at the first instant is strictly greater than the intermediate aperture angle α1i of a second transition image intended to be displayed at the second transition instant. The intermediate aperture angle α1i of each transition image is preferably strictly less than the aperture angle α1 of the initial synthesis image and than the intermediate aperture angle of any previous transition image. The aperture angle α1i of each transition image is then a strictly decreasing function of the transition instant at which this transition image is intended to be displayed.
[0052] For example, the aperture angle of the initial synthesis image is between 30° and 140°, in particular equal to 90°, and the intermediate aperture angle of the last transition image is less than 10°, for example between 0.1° and 10°, for example substantially equal to 5°. The intermediate observation distance Zi of each transition image is greater than the observation distance Z of the initial synthesis image and than the intermediate observation distance of any previous transition image. The intermediate observation distance Zi of each transition image is thus an increasing function of the transition instant at which this transition image is intended to be displayed. [0053] By "increasing function" is meant a non-constant increasing function, that is to say such that there exist at least a first and a second successive transition instants, the second transition instant ti being later than the first transition instant ti−1, such that the intermediate observation distance Zi−1 of a first transition image intended to be displayed at the first instant is strictly less than the intermediate observation distance Zi of a second transition image intended to be displayed at the second transition instant. The intermediate observation distance Zi of each transition image is preferably strictly greater than the observation distance Z of the initial synthesis image and than the intermediate observation distance of any previous transition image. The intermediate observation distance Zi of each transition image is then a strictly increasing function of the transition instant at which this transition image is intended to be displayed. [0054] For example, the observation distance of the initial synthesis image is equal to 100 m, and the observation distance of the last transition image is substantially equal to 1600 km.
Preferably, the intermediate observation distance Zi of each transition image is a nonlinear increasing function, notably convex, of the transition instant at which this transition image is intended to be displayed. In particular, such a convex function makes it possible to make the transition between the initial image and the final image more fluid. FIG. 8 shows an example of a function relating the transition instant ti, on the abscissa, to the intermediate observation distance Zi, on the ordinate, the scales on the abscissa and the ordinate being normalized between 0 and 1. Moreover, the module 36 is preferably configured to determine the intermediate aperture angle α1i of each transition image as a function of the intermediate observation distance Zi determined for this transition image. Preferably, the intermediate aperture angle α1i of each transition image is a nonlinear decreasing function of the transition instant ti at which this transition image is intended to be displayed. In particular, the intermediate aperture angle α1i of a transition image is determined as a function of the intermediate observation distance Zi, so that the length of the area represented by the transition image is included in a predetermined bounded range around the length A1 of the area represented by the initial synthesis image. For example, the aperture angle α1i of each transition image is determined as a function of the aperture angle α1 of the initial synthesis image, of a virtual aperture angle α1'i, and of the transition instant ti at which the transition image is intended to be displayed. The virtual aperture angle α1'i is such that the length of the area represented on a virtual image, viewed from an observation distance equal to the intermediate observation distance Zi of the transition image and seen with this virtual aperture angle α1'i, would be equal to the length A1 of the area represented by the initial synthesis image.
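As an illustration, the convex distance schedule described above can be sketched as follows. The power-law shape and the exponent p are assumptions: the text requires only an increasing, nonlinear, convex function between the initial observation distance (for example 100 m) and the final one (for example 1600 km), and does not give the exact curve of FIG. 8.

```python
def intermediate_distance(s: float, z0: float = 100.0,
                          z_final: float = 1_600_000.0, p: float = 3.0) -> float:
    """Observation distance Zi at a normalized transition instant s in [0, 1].

    The convex power law s**p stands in for the curve of FIG. 8; z0 and
    z_final match the example values given in the text (100 m, 1600 km).
    """
    return z0 + (z_final - z0) * s ** p
```

Because the function is convex, the distance grows slowly at the start of the transition and quickly at the end, which is what keeps the early transition images close to the initial image.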
The virtual aperture angle α1'i is thus equal to: α1'i = 2 · arctan(A1 / (2 · Zi)). Preferably, the intermediate aperture angle α1i of each transition image is determined as a weighted average between the aperture angle α1 of the initial synthesis image and the virtual aperture angle α1'i, whose weighting coefficients vary as a function of the transition instant ti at which the transition image is intended to be displayed. In particular, the intermediate aperture angle α1i of each transition image is determined according to a function of the type: α1i = (1 − Y) · α1 + Y · α1'i, where Y, which varies between 0 and 1, is an increasing function of the transition instant ti at which the transition image is intended to be displayed. FIG. 9 shows an example of a function relating the transition instant ti, on the abscissa, to the coefficient Y, on the ordinate, the abscissa scale being normalized between 0 and 1. According to this example, the coefficient Y is strictly increasing between a first transition instant t1 and a median transition instant tj, at which the coefficient Y takes the value 1, and then remains constant. [0055] Such a determination of the intermediate aperture angle α1i and of the observation distance Zi of each intermediate image makes it possible to obtain a fluid transition between the initial synthesis image and the final synthesis image. The module 36 is furthermore configured to generate a plurality of transition images between the initial synthesis image and the final synthesis image, each transition image being viewed according to the intermediate aperture angle α1i and the intermediate observation distance Zi determined for this transition image. The module 36 is further configured to control the successive display by the display device 14 of these transition images at the successive transition instants ti, and then to control the display by the display device 14 of the final synthesis image, according to the second type of perspective.
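The two relations above can be sketched directly; the formulas follow the text (α1'i = 2·arctan(A1/(2·Zi)) and the weighted average), while the function and parameter names are illustrative only.

```python
import math

def virtual_aperture_deg(a1_len: float, z_i: float) -> float:
    # α1'i = 2·arctan(A1 / (2·Zi)): the angle under which a zone of
    # length A1 is seen from the intermediate observation distance Zi.
    return math.degrees(2.0 * math.atan(a1_len / (2.0 * z_i)))

def intermediate_aperture_deg(alpha1: float, a1_len: float,
                              z_i: float, y: float) -> float:
    # α1i = (1 − Y)·α1 + Y·α1'i, with Y increasing from 0 to 1 over
    # the transition, so the angle glides from α1 toward α1'i.
    return (1.0 - y) * alpha1 + y * virtual_aperture_deg(a1_len, z_i)
```

With Y = 0 the transition image keeps the initial aperture angle α1; with Y = 1 it adopts the virtual angle that exactly frames the length A1 from the current distance Zi.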
Similarly, during a transition between an initial synthesis image according to the second type of perspective and a final synthesis image according to the first type of perspective, the module 36 is configured to generate three-dimensional transition images according to the first type of perspective by increasing, from one transition image to the next, the aperture angle α1i, and decreasing, from one transition image to the next, the observation distance Zi, so that the length A1i of the area represented by each transition image remains within a bounded range around the length A1 of the area represented by the final synthesis image. The final synthesis image may itself be considered as a transition image. The gradual increase of the aperture angle α1i from one transition image to the next makes it possible to produce a fluid transition between the synthesis image according to the second type of perspective and the synthesis image according to the first type of perspective. Furthermore, the gradual decrease in the observation distance Zi makes it possible to keep the length of the area represented by the transition images substantially identical to the length of the area intended to be represented by the final synthesis image, and therefore contributes to providing a smooth transition between the initial synthesis image and the final synthesis image. The module 36 is thus configured to generate a first transition image viewed at a first intermediate aperture angle α11 and a first intermediate observation distance Z1. The first intermediate aperture angle is for example less than 10°, in particular equal to 5°. The first intermediate observation distance is for example equal to 1600 km. The module 36 is further configured to generate a plurality of additional transition images, each viewed at an intermediate aperture angle α1i and an intermediate observation distance Zi.
The intermediate aperture angle α1i of each transition image is greater than the intermediate aperture angle of any previous transition image. [0056] The intermediate aperture angle α1i of each transition image is thus an increasing function of the transition instant at which this transition image is intended to be displayed. By "increasing function" is meant a non-constant increasing function, such that there exist at least a first and a second successive transition instants, the second transition instant ti being later than the first transition instant ti−1, such that the intermediate aperture angle α1i−1 of a first transition image intended to be displayed at the first instant is strictly less than the intermediate aperture angle α1i of a second transition image intended to be displayed at the second transition instant. [0057] The intermediate aperture angle α1i of each transition image is preferably a strictly increasing function of the transition instant at which this transition image is intended to be displayed. The intermediate aperture angle α1i of each transition image is thus strictly greater than the intermediate aperture angle of any previous transition image. For example, the intermediate aperture angle of the last transition image is substantially equal to 90°. The intermediate observation distance Zi of each transition image is smaller than the intermediate observation distance of any previous transition image. The intermediate observation distance Zi of each transition image is thus a decreasing function of the transition instant at which this transition image is intended to be displayed.
By "decreasing function" is meant a non-constant decreasing function, such that there exist at least a first and a second successive transition instants, the second transition instant ti being later than the first transition instant ti−1, such that the intermediate observation distance Zi−1 of a first transition image intended to be displayed at the first instant is strictly greater than the intermediate observation distance Zi of a second transition image intended to be displayed at the second transition instant. The intermediate observation distance Zi of each transition image is preferably strictly less than the intermediate observation distance of any previous transition image. [0058] The intermediate observation distance Zi of each transition image is then a strictly decreasing function of the transition instant at which this transition image is intended to be displayed. For example, the observation distance of the last transition image is substantially equal to 100 m. Preferably, the intermediate observation distance Zi of each transition image is a nonlinear decreasing function, in particular convex, of the transition instant at which this transition image is intended to be displayed. For example, the intermediate observation distance Zi is determined according to a function symmetrical to that used during a transition between a synthesis image according to the first type of perspective and a synthesis image according to the second type of perspective, such as the one illustrated in FIG. 8. Furthermore, the module 36 is preferably configured to determine the intermediate aperture angle α1i of each transition image as a function of the intermediate observation distance Zi determined for this transition image. Preferably, the intermediate aperture angle α1i of each transition image is a nonlinear increasing function of the transition instant ti at which this transition image is intended to be displayed.
In particular, the intermediate aperture angle α1i of a transition image is determined as a function of the intermediate observation distance Zi, so that the length of the area represented by the transition image is within a predetermined bounded range around the length A1 of the area represented by the final synthesis image. For example, the intermediate aperture angle α1i of each transition image is determined as a function of the aperture angle α1 of the final synthesis image, of the virtual aperture angle α1'i, and of the transition instant ti at which the transition image is intended to be displayed. Preferably, the intermediate aperture angle α1i of each transition image is determined as a weighted average between the aperture angle α1 of the final synthesis image and the virtual aperture angle α1'i, whose weighting coefficients vary according to the transition instant ti at which the transition image is intended to be displayed. In particular, the intermediate aperture angle α1i of each transition image is determined according to a function of the type: α1i = (1 − Y') · α1 + Y' · α1'i, where Y', which varies between 0 and 1, is a decreasing function of the transition instant ti at which the transition image is intended to be displayed. For example, Y' is such that: Y'(ti) = Y(tn − ti), where tn denotes the final transition instant and Y is the function defined above, for example as illustrated in FIG. 9. [0059] The module 36 is furthermore configured to control the successive display by the display device 14 of these transition images at the successive transition instants ti, and then to control the display by the display device 14 of the final synthesis image according to the first type of perspective. To generate a synthesis image according to the first type of perspective, the module 36 associates with each pixel of the three-dimensional environment a depth attribute, representative of the altitude of this pixel with respect to a horizontal reference plane.
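The coefficient Y of FIG. 9 and its reverse-transition counterpart Y' can be sketched as follows. The linear ramp up to the median instant is an assumption, since only the qualitative shape of FIG. 9 (strictly increasing, then constant at 1) is given, and the time reversal Y'(s) = Y(1 − s) on a normalized scale is one reading of the relation Y'(ti) = Y(tn − ti).

```python
def y_coefficient(s: float, s_mid: float = 0.5) -> float:
    # Coefficient Y of FIG. 9 on a normalized time scale: strictly
    # increasing up to a median instant s_mid, then held constant at 1.
    return min(s / s_mid, 1.0)

def y_prime(s: float, s_mid: float = 0.5) -> float:
    # Decreasing coefficient Y' for the reverse transition, obtained
    # by time reversal of Y.
    return y_coefficient(1.0 - s, s_mid)
```

Used in the weighted averages of the text, Y drives the forward transition (toward the top view) and Y' the reverse one (back to the perspective view).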
Such an attribute makes it possible for the module 36 to display on the synthesis image only the objects not hidden by other objects. The depth is coded on a predetermined number of bits, independent of the observation distance. [0060] Such coding could induce a loss of depth-coding precision when displaying images viewed from a point of view very far from the central point of interest, and cause visual artifacts, including flashing effects, the module 36 no longer being able to determine which pixel should be displayed on the screen because of this drop in precision. Such an effect would be particularly likely to occur during a transition between an image according to the first type of perspective and an image according to the second type of perspective, or during a transition between an image according to the second type of perspective and an image according to the first type of perspective. To avoid such an effect, the module 36 is configured to associate a depth attribute only with pixels in a predefined area, especially when the observation distance is greater than a predetermined observation distance. This predefined area is defined as the set of pixels situated at an altitude lower than a predetermined maximum altitude, and preferably greater than a predefined minimum altitude. The maximum altitude is for example equal to 20 km. The minimum altitude is, for example, defined as the altitude of the terrain. Thus, for the pixels in the predefined area, which is the only area in which objects of interest are likely to be located, the depth coding remains accurate enough to avoid the appearance of visual artifacts even when the observation distance becomes very large, especially when the point of view is located at an altitude higher than the maximum altitude. [0061] An exemplary method for displaying information relating to a flight of an aircraft, implemented by means of a visualization system described above, will now be described with reference to FIG. 10.
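The restriction of the depth attribute to the predefined altitude band can be sketched as a simple predicate. The far-distance threshold is hypothetical (the text states that a "predetermined observation distance" exists but gives no value); the 20 km maximum altitude is the example value from the text.

```python
MAX_ALT_M = 20_000.0        # predetermined maximum altitude (20 km in the text)
FAR_DISTANCE_M = 100_000.0  # hypothetical threshold beyond which the rule applies

def has_depth_attribute(pixel_alt_m: float, terrain_alt_m: float,
                        observation_dist_m: float) -> bool:
    """Whether a pixel receives a depth attribute.

    Beyond the far-distance threshold, depth is coded only for pixels
    between the terrain altitude and the maximum altitude, preserving
    depth-buffer precision in the band where objects of interest lie.
    """
    if observation_dist_m <= FAR_DISTANCE_M:
        return True
    return terrain_alt_m <= pixel_alt_m <= MAX_ALT_M
```

Pixels outside the band (for example, sky well above 20 km seen from 1600 km away) simply carry no depth attribute, so they cannot contribute flickering depth-test ties.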
In an initial step 200, the module 36 generates an initial synthesis image and controls the display of this initial synthesis image on the display device 14, in particular in a window of the touch screen 16 of length Lf and width lf. In the example described, the initial synthesis image is an image according to the first type of perspective. [0062] The initial synthesis image is centered on an initial central point of interest Pc0, and viewed from an initial point of view Pv0 located at an initial distance Z0 from the initial central point of interest Pc0. The initial synthesis image is, for example, exocentric. In particular, it will be considered hereinafter, by way of example, that the initial central point of interest Pc0 corresponds to the position of the aircraft. The initial synthesis image represents an observation volume substantially corresponding to a pyramid, with an initial horizontal aperture angle α10 and an initial vertical aperture angle α20. The initial horizontal aperture angle α10 is, for example, set to 90 degrees by default, the initial vertical aperture angle α20 then being adapted according to the length and width of the displayed image. The initial vertical angular position αv0 can also be set by default, for example to a value of 30 degrees. On the other hand, the initial observation distance Z0 between the central point of interest Pc0 and the point of view Pv0 is preferably chosen so that a set of points of interest can be fully included in the observation pyramid. The initial horizontal angular position αh0 may also be set by default, for example to a value of 0 degrees. The initial synthesis image comprises a synthetic representation of the environment located in the vicinity of the aircraft trajectory, on which is superimposed a curve representative of a portion of the trajectory of the aircraft in this environment.
The initial synthesis image also represents, where appropriate, one or more symbolic objects, for example representative of the position of crossing points, associated or not with constraints, of altitude profile points associated with the trajectory of the aircraft, of the position of the aircraft and/or of objects that could interfere with the trajectory of the aircraft, for example clouds, storm cells or other aircraft. The position of the point of view Pv0 associated with the initial horizontal angular position αh0 is not necessarily that making it possible to optimally visualize the trajectory of the aircraft. Thus, the module 36 preferably automatically determines an optimum position of the point of view making it possible to optimize the portion of trajectory visualized on the image. In particular, the module determines, during a step 202, an optimized position of the point of view, located at the distance Z0 from the central point of interest Pc0, at the vertical angular position αv0 and at an optimized horizontal angular position αhopt making it possible to maximize the length of the portion of trajectory visualized on the image, the aperture angles α1 and α2 remaining fixed. During step 202, the module 36 determines during a phase 204 a set of successive points on the trajectory of the aircraft, denoted Pi, according to a predetermined sampling, from an initial point which corresponds, for example, to the position of the aircraft, preferably in the downstream direction of this trajectory. For example, the points Pi are regularly spaced along the trajectory. The module 36 then adjusts, during a phase 205, or during a plurality of successive phases 205 carried out iteratively, the horizontal angular position, starting from the initial horizontal angular position αh0, so as to successively include the points Pi in the observation pyramid, while keeping in the observation pyramid all the points of interest.
Thus, during a first phase 205, the module 36 determines a first modified horizontal angular position αh1. For this purpose, the module 36 determines a modified horizontal angular position such that the point P1 is included in the observation pyramid, preferably such that the edge of the observation pyramid closest to the point P1 before modification of the initial horizontal angular position αh0 intersects the point P1 when the horizontal angular position is equal to this modified horizontal angular position. If this modified horizontal angular position is not within the predefined authorized angular range for the horizontal angular position, the module 36 chooses as the first modified horizontal angular position αh1 the limit of this authorized range closest to the modified angular position thus determined. [0063] If the modified horizontal angular position is within the predefined authorized angular range for the horizontal angular position, the module 36 selects this modified angular position as the first modified horizontal angular position αh1. Then, during each subsequent phase 205, the module 36 determines a new modified horizontal angular position. To this end, the module 36 determines during each phase a modified horizontal angular position such that the point Pi is included in the observation pyramid, preferably such that the edge of the observation pyramid closest to the point Pi before modification of the modified horizontal angular position αhi−1 determined during the previous iteration of the phase 205 intersects the point Pi when the horizontal angular position is equal to this modified horizontal angular position. Similarly, if the modified horizontal angular position is not within the predefined authorized angular range for the horizontal angular position, the module 36 selects as the new modified horizontal angular position αhi the limit of this authorized range closest to the determined angular position.
If the modified horizontal angular position is within the predefined authorized angular range for the horizontal angular position, the module 36 selects this modified angular position as the new modified horizontal angular position αhi. [0064] In a final phase 205, the module 36 detects that it is not possible to find a horizontal angular position such that the considered point Pi of the trajectory is included in the observation pyramid without other points of the trajectory, or points of the set of points of interest, leaving the observation pyramid, and then terminates the iterations. The optimal horizontal angular position αhopt is then selected by the module 36 as the last modified angular position αhi−1 determined. The optimal horizontal angular position αhopt is considered as a new initial angular position. The module 36 then determines a new initial point of view Pv0 situated at the initial distance Z0 from the initial central point of interest Pc0, of initial vertical angular position αv0, for example equal to 30 degrees, and of initial horizontal angular position αh0 equal to the optimal horizontal angular position αhopt. The module 36 then generates, during a step 206, a new initial synthesis image viewed from the initial point of view Pv0 and controls its display by the display device 14. [0065] Several actions of modification of this initial synthesis image by an operator, as well as the steps implemented by the system 10 as a result of these actions, will now be described successively. To move the central point of interest along the trajectory of the aircraft, an operator selects the second mode of modification of the central point of interest via the man-machine interface 18. Then, during a step 211, the operator implements an action of modification of the central point of interest, this action comprising a displacement of a control member between an initial position and a final position.
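The iterative adjustment of phases 205 can be sketched as a greedy loop. This is a strong simplification under stated assumptions: trajectory points are reduced to bearings (in degrees) seen from the central point of interest, and the observation pyramid is reduced to a horizontal half-aperture, whereas the patent works with the full three-dimensional pyramid and a set of points of interest.

```python
def optimize_heading(bearings, h0=0.0, half_aperture=45.0,
                     h_min=-180.0, h_max=180.0):
    """Greedy sketch of step 202: rotate the horizontal angular position
    so successive trajectory points enter the view without earlier ones
    leaving it; stop at the first point that cannot be included."""
    h = h0
    kept = []
    for b in bearings:
        # smallest rotation bringing bearing b onto the nearest edge
        if b > h + half_aperture:
            cand = b - half_aperture
        elif b < h - half_aperture:
            cand = b + half_aperture
        else:
            cand = h
        cand = max(h_min, min(h_max, cand))  # authorized angular range
        if all(abs(p - cand) <= half_aperture for p in kept + [b]):
            h, kept = cand, kept + [b]
        else:
            break  # final phase 205: keep the last valid position
    return h
```

With a 90° horizontal aperture (half-aperture 45°), points at bearings 0°, 30° and 60° are all absorbed by rotating to 15°, whereas a point at 100° would force an earlier point out and stops the loop.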
In the example described, this action comprises a displacement of a control member, for example an operator's finger or a stylus, between an initial position and a final position on the touch screen 16. The module 36 detects this modification action during a step 212, and implements, at a plurality of successive instants during this displacement, a series of steps in order to display at each of these instants a modified synthesis image centered on a modified point of interest. In particular, at each of the successive instants, the module 36 detects during a step 213 the position of the control member, this position being between the initial position and the final position, and determines during a step 214 a modified central point of interest, denoted Pci, depending on the position of the control member at this instant. Each modified central point of interest Pci is located along the curve 44. This step 214 comprises a phase 215 of determination by the module 36, as a function of the displacement vector between the initial position of the control member and its position at the instant considered, of a curvilinear distance on the curve representative of the trajectory between the initial central point of interest Pc0 and the modified central point of interest Pci. Preferably, this curvilinear distance is determined as a function of the displacement vector and of a vector tangent to the curve at the initial central point of interest Pc0, in particular as a function of a dot product between a projection of the displacement vector on a horizontal plane of the initial synthesis image and this tangent vector. [0066] Step 214 then comprises a phase 216 of determination by the module 36 of the position of the modified central point of interest Pci on the curve 44, from the position on the curve of the initial central point of interest Pc0 and from the curvilinear distance determined during phase 215.
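The dot-product rule of phase 215 can be sketched as follows; the vectors are taken as already projected on the horizontal plane of the image, and the pixel-to-metre scale factor is omitted (both assumptions, since the text does not fix them).

```python
def curvilinear_offset(drag_vec, tangent_vec):
    """Phase 215 sketch: signed curvilinear distance along the trajectory
    produced by a drag gesture.

    The offset is the dot product of the 2-D drag vector with the unit
    tangent to the trajectory curve at the initial central point of
    interest, so only the drag component along the trajectory counts.
    """
    tx, ty = tangent_vec
    norm = (tx * tx + ty * ty) ** 0.5
    dx, dy = drag_vec
    return (dx * tx + dy * ty) / norm
```

A drag perpendicular to the trajectory yields a zero offset, which is precisely why the central point of interest stays on the curve whatever the operator's movement.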
[0067] Following step 214, the module 36 generates, during a step 217, a modified synthesis image centered around the modified central point of interest Pci, and controls the display of this modified synthesis image on the touch screen 16 during a step 218. The steps 213, 214, 217 and 218 are implemented at a plurality of successive instants, at least until the control member reaches its final position. Thus, during the action of modification of the position of the central point of interest by the operator, the central point of interest remains at each instant along the curve representative of the trajectory of the aircraft, whatever the movement performed by the operator. To modify the scale of the synthesis image, that is to say, in the example described, to modify the observation distance Z, an operator implements during a step 221 an action of modification of the scale via the man-machine interface 18. [0068] This modification action comprises a displacement of two control members, in particular of two fingers of the operator, on the touch screen 16 in two substantially opposite directions, followed in the example described by a holding of the two control members on the touch screen 16 at the end of their displacement. During a step 222, the module 36 detects this modification action, in particular detects the positioning of the two members on the touch screen opposite two distinct initial points P1 and P2, detects the position of these two initial points, and determines an initial distance d0 between the initial points. In a step 223, the module 36 determines a middle point Pm located halfway between these two initial points P1 and P2, as well as a first zone 98, a second zone 100 and a third zone 102. The second and third zones are preferably centered on the middle point Pm. As described with reference to FIG.
6, the first, second and third zones 98, 100, 102 are defined by a first closed curve C1 and a second closed curve C2 situated inside the first closed curve C1, the two curves C1 and C2 being preferably centered on the middle point Pm. The first zone 98, which includes the initial points P1 and P2, is formed by the set of points contained between the first curve C1 and the second curve C2; the second zone 100 is formed by the set of points contained inside the second curve C2; and the third zone 102 is formed by the points lying outside the curves C1 and C2. [0069] Then, during the scale modification action, the module 36 implements, at a plurality of successive instants during the displacement of the two control members, a series of steps in order to display at each of these instants a synthesis image modified to a modified scale. [0070] In particular, at each of these instants, the module 36 determines in a step 224 the position of the two control members, and then determines, during a step 225, a scale modification factor γi, as a function of this position. In particular, the module 36 determines at each instant, denoted ti, the scale modification factor γi as a function of the position of the points opposite which the control members are positioned with respect to the first zone 98. If, at the instant considered, the control members remain positioned on the touch screen 16 opposite points located inside the first zone 98, the module 36 determines in step 225 the scale modification factor γi according to the first calculation mode described above. According to this first mode, the module 36 determines at the instant ti the scale modification factor γi as a function of the distance di between the points opposite which the control members are positioned at this instant ti and the distance d0 between the initial points P1 and P2.
Preferably, the scale modification factor γi is a strictly decreasing function of the distance di, for example a linear function of the deviation or of the ratio between the distance d0 and the distance di. If, on the contrary, at the instant ti considered, at least one of the control members is positioned opposite a point situated outside the first zone 98, that is to say within the second zone 100 or the third zone 102, the module 36 determines in step 225 the scale modification factor γi according to the second calculation mode described above. According to this second calculation mode, the module 36 determines at each instant ti the scale modification factor, denoted γi, as a function of the duration of holding of the control members outside the first zone 98. This holding duration, denoted Ti, corresponds to the time elapsed between the instant, denoted t'0, at which one or both control members reached the limits of the first zone 98 and the instant ti considered. Preferably, according to this second calculation mode, when the control members are positioned on the touch screen 16 opposite points outside the first zone 98, the scale modification factor γi is independent of the position of the points of the screen located opposite these control members. [0071] Then, in a step 226, the module 36 applies the scale modification factor γi determined at the instant considered to the initial synthesis image to determine a modified scale. In particular, in the example described, the module 36 determines a modified observation distance Zi by applying the factor γi to the initial distance Z0, and determines a new point of view located at the distance Zi from the central point of interest. During a step 227, the module 36 generates a modified synthesis image at the scale thus determined, and controls the display of this modified synthesis image on the touch screen 16 during a step 228.
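The two calculation modes of step 225 can be sketched as follows. The ratio d0/di is one admissible strictly decreasing function for the first mode; the exponential law and its rate in the second mode are pure assumptions, since the text only states that γi then depends on the holding duration Ti and not on the finger positions.

```python
def gamma_first_mode(d0: float, d_i: float) -> float:
    # First mode (members inside zone 98): γi strictly decreasing in the
    # current spacing d_i; spreading the fingers (d_i > d0) gives γ < 1,
    # hence a smaller observation distance Zi = γi·Z0 (zoom in).
    return d0 / d_i

def gamma_second_mode(hold_time_s: float, zoom_in: bool,
                      rate: float = 0.5) -> float:
    # Second mode (a member outside zone 98): γi depends only on the
    # holding duration Ti; the exponential law and `rate` are assumed.
    exp = -rate * hold_time_s if zoom_in else rate * hold_time_s
    return 2.0 ** exp

def modified_distance(z0: float, gamma: float) -> float:
    # Step 226: Zi = γi · Z0
    return gamma * z0
```

Because the second mode ignores position, holding the fingers past the zone boundary produces a steady, continuous zoom until they are lifted.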
[0072] Thus, during a scaling action, the module 36 determines the scale modification factor according to the first calculation mode, that is to say as a function of the distance d_i between the points opposite which the control members are positioned, as long as these points remain in the first zone 98; then, as soon as at least one of these points leaves the first zone 98, the module 36 determines the scale modification factor according to the second calculation mode, that is to say as a function of the duration for which the point or points are held outside the first zone. The steps 224, 225, 226, 227 and 228 are implemented at a plurality of successive instants, at least until the control members are released from the touch screen 16. [0073] As soon as the control members are no longer placed on the touch screen 16, the action of modifying the dimensions of the area displayed by the synthesis image stops. Preferably, in a subsequent step, the module 36 compares the dimensions A1_n and A2_n, or the distance Z_n, associated with the last modified image generated, with predetermined dimension or distance thresholds, and determines the dimension thresholds, respectively the distance threshold, closest to the dimensions A1_n and A2_n or to the distance Z_n. The module 36 then generates a final modified image representing a zone whose dimensions correspond to the closest determined dimension thresholds and/or a distance Z equal to the determined distance threshold, and controls its display on the touch screen 16. To pass from the initial synthesis image according to the first type of perspective to a synthesis image according to the second type of perspective, for example a top view, the operator performs in a step 231 a modification action by means of the man-machine interface 18, for example by actuating a dedicated icon superimposed on the synthesis image by the module 36.
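The final snapping step described in paragraph [0073] amounts to choosing the predetermined threshold nearest to the last computed value. A minimal sketch (the threshold values in the test are illustrative):

```python
def snap_to_threshold(value, thresholds):
    """Return the predetermined threshold closest to `value`, e.g. the
    distance threshold closest to the last observation distance Z_n,
    so that the final modified image uses a 'round' scale."""
    return min(thresholds, key=lambda t: abs(t - value))
```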
In a step 232, the module 36 detects this modification action, then generates, during a plurality of successive steps 233, a plurality of successive transition synthesis images between the initial synthesis image according to the first type of perspective and the synthesis image according to the second type of perspective. The transition images are intended to be displayed on the display device 14 at a plurality of successive transition instants t_i between an initial instant of display of the initial synthesis image and a final instant of display of the synthesis image according to the second type of perspective. Each transition image generated during a step 233 is an image according to the first type of perspective. Each transition image generated during a step 233 is centered around an intermediate central point of interest Pc_i, is viewed from an intermediate point of view Pv_i located at an intermediate observation distance Z_i, and is viewed at an intermediate horizontal aperture angle α1_i and an intermediate vertical aperture angle α2_i. Each transition image represents an area of the environment of intermediate length A1_i and intermediate width A2_i, the ratio between the intermediate length A1_i and the intermediate width A2_i remaining constant and equal to the ratio between the length A1_0 and the width A2_0 of the initial three-dimensional synthesis image. As indicated above, the intermediate horizontal aperture angle α1_i and the intermediate vertical aperture angle α2_i being related to each other as a function of the ratio between the intermediate length A1_i and the intermediate width A2_i, which remains constant, one or the other of these aperture angles, for example the intermediate horizontal aperture angle α1_i, will subsequently be designated generally as the "aperture angle".
Each step 233 comprises a phase 235 of determination, by the module 36, of the intermediate aperture angle α1_i and of the intermediate observation distance Z_i of the transition image intended to be displayed at the transition instant t_i associated with this step. [0074] As explained above, the aperture angle α1_i of each transition image is thus a decreasing function, preferably a strictly decreasing function, of the transition instant t_i at which this transition image is intended to be displayed, and the intermediate observation distance Z_i of each transition image is an increasing, preferably strictly increasing, function of the transition instant at which this transition image is intended to be displayed. [0075] Preferably, the intermediate observation distance Z_i of the transition image is determined during each phase 235 according to a nonlinear increasing function, in particular a convex function, of the transition instant at which this transition image is intended to be displayed, as shown in Figure 7. [0076] Moreover, during each phase 235, the module 36 determines the intermediate aperture angle α1_i of the transition image as a function of the intermediate observation distance Z_i determined for this transition image, according to a nonlinear decreasing function of the transition instant t_i at which this transition image is intended to be displayed. [0077] In particular, the intermediate aperture angle α1_i of a transition image is determined as a function of the intermediate observation distance Z_i so that the length of the area represented by the transition image is included in a predetermined bounded range around the length A1_0 of the area represented by the initial three-dimensional synthesis image.
[0078] For example, the intermediate aperture angle α1_i of each transition image is determined as a function of the aperture angle α1_0 of the initial three-dimensional synthesis image, of the virtual aperture angle α1' defined above, and of the transition instant t_i at which the transition image is intended to be displayed. Preferably, during each phase 235, the intermediate aperture angle α1_i of the transition image is determined as a weighted average between the aperture angle α1_0 of the initial three-dimensional synthesis image and the virtual aperture angle α1', whose weighting coefficients vary as a function of the transition instant t_i at which the transition image is intended to be displayed. In particular, the intermediate aperture angle α1_i is determined in each phase 235 according to a weighted-average function of this type (equation EQU1). Each phase 235 is followed by a phase 236 of generation of a transition image viewed according to the intermediate aperture angle α1_i and the intermediate observation distance Z_i determined for this transition image during the phase 235. [0079] Each step 233 of generation of a transition image is followed by a step 238 of control, by the module 36, of the display of this transition image by the display device 14 at the transition instant t_i associated with this transition image. The progressive decrease of the aperture angle α1_i from one transition image to the next makes it possible to produce a fluid transition between the synthetic image according to the first type of perspective and the synthetic image according to the second type of perspective.
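One possible forward-transition schedule satisfying paragraphs [0074] to [0078] can be sketched as follows. The assumptions are mine: the weighting coefficient is the normalised transition time s = t / t_final, Z follows the quadratic (hence convex and increasing) law Z(t) = Z_0 + (Z_f - Z_0)·s², and the angle is the linear weighted average α(t) = (1 - s)·α1_0 + s·α1'; the patent fixes only the monotonicity and the weighted-average form, not these exact functions:

```python
def transition_schedule(t, t_final, alpha0, alpha_virtual, z0, z_final):
    """Intermediate aperture angle alpha1_i and observation distance Z_i
    at transition instant t (forward transition, 3-D view to top view).
    alpha decreases when alpha_virtual < alpha0; Z increases and is
    convex in t.  Both laws are illustrative assumptions."""
    s = t / t_final
    z = z0 + (z_final - z0) * s ** 2            # increasing, convex
    alpha = (1.0 - s) * alpha0 + s * alpha_virtual  # weighted average
    return alpha, z
```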
Furthermore, the gradual increase of the observation distance Z_i makes it possible to preserve a length of the zone represented by the transition images substantially identical to the length of the zone intended to be represented by the synthetic image according to the second type of perspective, and thus contributes to providing a fluid transition between the initial synthesis image according to the first type of perspective and the final synthesis image according to the second type of perspective. Then, following all the successive steps 233 and 238, the module 36 generates, during a step 239, a synthetic image according to the second type of perspective and controls its display by the display device 14 during a step 240. Preferably, the synthesis image according to the second type of perspective, as well as the transition images, are centered on the same central point of interest as the initial synthesis image according to the first type of perspective. In addition, the length and width of the area represented by the final image are substantially equal to the length and width of the area represented by the initial image. Similarly, in order to pass from the initial synthesis image according to the second type of perspective to a final synthesis image according to the first type of perspective, the operator performs, during a step 241, a modification action using the man-machine interface 18, for example by actuating a dedicated icon superimposed on the synthesis image by the module 36. In a step 242, the module 36 detects this modification action, then generates, during a plurality of successive steps, a plurality of successive transition synthesis images between the initial synthesis image and the final synthesis image.
The transition images are intended to be displayed on the display device 14 at a plurality of successive transition instants t_i between an initial instant of display of a first transition image and a final instant of display of the final synthesis image according to the first type of perspective. In a first step 243, the module 36 generates the first transition image. In this step 243, the module 36 determines a low first intermediate aperture angle α1_1 and a high first intermediate observation distance Z_1. The first intermediate aperture angle is for example equal to 5°. The first intermediate observation distance is for example equal to 1600 km. Then, the module 36 generates the first transition image. The first transition image is centered around an intermediate central point of interest Pc_1 and is viewed from an intermediate point of view Pv_1 located at the intermediate observation distance Z_1 from the central point of interest Pc_1. The first transition image is furthermore viewed according to the intermediate horizontal aperture angle α1_1 and an associated intermediate vertical aperture angle α2_1. Then, during a plurality of successive steps 244, the module 36 generates a plurality of successive transition synthesis images between the first transition image and the final three-dimensional synthesis image. Each transition image generated during a step 244 is an image according to the first type of perspective. Each transition image generated during a step 244 is centered around an intermediate central point of interest Pc_i, is viewed from an intermediate point of view Pv_i located at an intermediate observation distance Z_i, and is viewed at an intermediate aperture angle α1_i. Each step 244 includes a phase 245 of determination, by the module 36, of the intermediate aperture angle α1_i and of the intermediate observation distance Z_i of the transition image intended to be displayed at the transition instant t_i associated with this step.
As explained above, the aperture angle α1_i of each transition image is an increasing, preferably strictly increasing, function of the transition instant t_i at which this transition image is intended to be displayed, and the intermediate observation distance Z_i of each transition image is a decreasing function, preferably a strictly decreasing function, of the transition instant t_i at which this transition image is intended to be displayed, so that the area displayed by the successive transition images remains substantially the same. Preferably, the intermediate observation distance Z_i of the transition image is determined during each phase 245 according to a nonlinear decreasing function, in particular a convex function, of the transition instant at which this transition image is intended to be displayed, as shown in Figure 7. Furthermore, during each phase 245, the module 36 determines the intermediate aperture angle α1_i of the transition image as a function of the intermediate observation distance Z_i determined for this transition image, according to a nonlinear increasing function of the transition instant t_i at which this transition image is intended to be displayed. In particular, the intermediate aperture angle α1_i of a transition image is determined as a function of the intermediate distance Z_i so that the length of the area represented by the transition image is within a predetermined bounded range around the length of the area intended to be represented by the final synthesis image, which is substantially equal to the length of the area represented by the initial synthesis image and to the length of the area represented by the first transition image. For example, the intermediate aperture angle α1_i of each transition image is determined as a function of the aperture angle α1 of the final synthesis image, of the virtual aperture angle α1' defined above, and of the transition instant t_i at which the transition image is intended to be displayed.
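The coupling between α1_i and Z_i that keeps the represented area nearly constant can be sketched with the standard perspective relation A1 = 2·Z·tan(α1/2). Holding the length exactly constant is a simplifying assumption, since the patent only requires it to stay within a bounded range around the target length:

```python
import math

def distance_for_length(a1, alpha_deg):
    """Observation distance Z_i giving a represented length a1 for a
    horizontal aperture angle alpha_deg, from A1 = 2 * Z * tan(alpha/2).
    As the angle increases from one transition image to the next, the
    returned distance decreases, matching the reverse transition."""
    return a1 / (2.0 * math.tan(math.radians(alpha_deg) / 2.0))
```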
For example, the intermediate aperture angle α1_i of each transition image is determined, during the phase 245, as a function of the aperture angle α1 of the final synthesis image, of the virtual aperture angle α1', and of the transition instant t_i at which the transition image is intended to be displayed, as indicated above. Each phase 245 is followed by a phase 246 of generation of a transition image viewed according to the intermediate aperture angle α1_i and the intermediate observation distance Z_i determined for this transition image during the phase 245. Each of the steps 243 and 244 is followed by a step 248 of control, by the module 36, of the display of this transition image by the display device 14 at the transition instant t_i associated with this transition image. The last transition image corresponds to the final image. [0080] It will be understood that the embodiments described above are not limiting. In particular, according to one variant, the tactile control device is dissociated from the display device 14. For example, the tactile control device is a trackpad. [0081] According to one variant, the man-machine interface comprises, in replacement of or in addition to the tactile control device, one or more control members, for example a mouse or a joystick, and/or a keyboard, a rotary control, etc. For example, an action of modifying the central point of interest of the first or second type described above may consist of a displacement of an object such as a cursor on the displayed synthesis image, by means of a control member, up to the icon 80 or to any position on the synthesis image, for example followed by the actuation of a key of a keyboard or of a button. An action of modifying the central point of interest of the third type described above may moreover consist of a displacement of an object such as a cursor on the displayed synthesis image, by means of a control member, while holding down a key or a button.
Moreover, the generated and displayed synthetic images do not necessarily reflect the environment of the aircraft and its position in real time. In particular, these synthetic images may correspond to a simulation of the flight of the aircraft, or of a particular phase of the flight of the aircraft, and may be displayed before, during or after this flight or this phase. For example, synthetic images illustrating an approach phase of the aircraft can be displayed during the flight of the aircraft, before this approach phase.
Claims (14) [0001] CLAIMS 1.- A system for displaying information relating to a flight of an aircraft, said system comprising: - a display device (14); - a module (36) for dynamic generation of synthetic images, configured to generate synthetic images, each synthetic image comprising a synthetic representation of the environment located in the vicinity of a trajectory of the aircraft and a curve (44) representative of a trajectory of the aircraft, said curve (44) being superimposed on said synthetic representation, said generation module (36) being configured to generate a first synthesis image centered around a first central point of interest (Pc_0) and to control the display, on said display device (14), of said first synthesis image; - a man-machine interface (18); said generation module (36) being configured to detect an action of modification of the central point of interest by an operator via said man-machine interface (18); said system being characterized in that said generation module (36) is further configured: - to determine, as a function of said modification action, a second central point of interest (Pc_i, Pc_n) located along said curve (44), said second central point of interest (Pc_i, Pc_n) being located along said curve (44) regardless of said modification action, - to generate a second synthesis image centered around said second central point of interest (Pc_i, Pc_n), and - to control the display, on said display device (14), of said second synthesis image. [0002] 2.- A system according to claim 1, characterized in that said action of modification of the central point of interest comprises a movement of a member by the operator between a first position and a second position. [0003] 3.
A system according to claim 2, characterized in that said first central point of interest (Pc_0) is located along said curve (44), and in that said action of modification of the central point of interest comprises a movement of a member by the operator between a first position and at least one second position in a direction not parallel to the tangent to said curve (44) at said first central point of interest (Pc_0). [0004] 4.- A system according to any one of claims 2 or 3, characterized in that said first central point of interest is located along said curve (44), and in that said generation module (36) is configured: - to determine, as a function of a displacement vector between said first position and said second position, a curvilinear distance on said curve (44) between said first central point of interest (Pc_0) and said second central point of interest (Pc_i, Pc_n), and - to determine, from a position on the curve (44) of said first central point of interest (Pc_0) and from said curvilinear distance, a position on the curve (44) of said second central point of interest (Pc_i, Pc_n). [0005] 5.- A system according to claim 4, characterized in that said generation module (36) is configured to determine said curvilinear distance as a function of said displacement vector and of a vector tangent to said curve (44) at said first central point of interest (Pc_0), in particular as a function of a dot product between a projection of said displacement vector on a horizontal plane of said first synthesis image and said tangent vector. [0006] 6.- A system according to any one of claims 1 to 5, characterized in that said synthetic images are three-dimensional images, said synthetic representation of the environment being a three-dimensional representation and said curve (44) being a three-dimensional curve. [0007] 7.
A system according to any one of claims 1 to 6, characterized in that said first synthesis image is viewed from a first point of view (Pv_0), and in that said module (36) is configured to detect an action of rotation of the position of said point of view with respect to said first central point of interest (Pc_0) in a vertical plane, respectively in a horizontal plane, said rotation action comprising a displacement of a member by an operator in a vertical direction, respectively in a horizontal direction, said generation module (36) being further configured to determine, as a function of said rotation action, a modified point of view, to generate a modified synthetic image viewed from said modified point of view, and to control the display, on said display device (14), of said modified synthetic image. [0008] 8.- A system according to claim 7, characterized in that said generation module (36) is configured to display on said first synthetic image a vertical slider (82) and/or a horizontal slider (84), and in that said rotation action comprises a movement of a member by an operator on said vertical slider (82), respectively on said horizontal slider (84), in a vertical direction, respectively in a horizontal direction. [0009] 9.
A method for displaying information relating to a flight of an aircraft, said method being characterized in that it comprises the following successive steps: - display (200, 206), on a display device (14), of a first synthetic image comprising a synthetic representation of the environment located in the vicinity of a trajectory of the aircraft and a curve (44) representative of a trajectory of the aircraft, said curve (44) being superimposed on said synthetic representation, said first synthetic image being centered around a first central point of interest (Pc_0), - detection (212) of an action of modification (211) of the central point of interest by an operator via a man-machine interface (18), - determination (214), as a function of said modification action, of a second central point of interest (Pc_i, Pc_n) situated along said curve, - generation (217) of a second synthesis image centered around said second central point of interest (Pc_i, Pc_n), - display (218), on said display device, of said second synthesis image. [0010] 10.- A method according to claim 9, characterized in that said action of modification of the central point of interest (211) comprises a movement of a member by an operator between a first position and a second position. [0011] 11.- A method according to claim 10, characterized in that said first central point of interest (Pc_0) is located along said curve (44), and in that said action of modification of the central point of interest (211) comprises a movement of a member by an operator between a first position and a second position in a direction not parallel to the tangent to said curve (44) at said first central point of interest (Pc_0). [0012] 12.- A method according to any one of claims 10 or 11, characterized in that said action of modification of the central point of interest (211) comprises a movement of said member by the operator between said first position and said second position on a touch screen (16). [0013] 13.
A method according to any one of claims 10 to 12, characterized in that said first central point of interest (Pc_0) is located along said curve (44), and in that the step (214) of determining said second central point of interest (Pc_i, Pc_n) comprises: - a phase (215) of determination, as a function of a displacement vector between said first position and said second position, of a curvilinear distance on said curve between said first central point of interest and said second central point of interest, and - a phase (216) of determination, from a position on the curve of said first central point of interest (Pc_0) and from said curvilinear distance, of said second central point of interest. [0014] 14.- A method according to claim 13, characterized in that said curvilinear distance is determined as a function of said displacement vector and of a vector tangent to said curve (44) at said first central point of interest (Pc_0), in particular as a function of a scalar product between a projection of said displacement vector on a horizontal plane of said first synthesis image and said tangent vector.
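The determination described in claims 4-5 and 13-14 can be sketched as follows, assuming the trajectory curve is given as a planar polyline, that the displacement vector has already been projected onto the horizontal plane, and that the curvilinear distance is a gain k times the dot product with the unit tangent at the first central point of interest; the gain k and the polyline representation are illustrative assumptions:

```python
import math

def second_point_of_interest(curve, idx0, displacement, k=1.0):
    """Return the index of the second central point of interest on the
    polyline `curve`, given the index idx0 of the first one and a
    (horizontal) drag vector.  The signed curvilinear distance is
    k * (displacement . unit tangent at idx0); the result is clamped
    to the ends of the curve."""
    x0, y0 = curve[idx0]
    x1, y1 = curve[min(idx0 + 1, len(curve) - 1)]
    tx, ty = x1 - x0, y1 - y0
    norm = math.hypot(tx, ty) or 1.0
    tx, ty = tx / norm, ty / norm                          # unit tangent
    s = k * (displacement[0] * tx + displacement[1] * ty)  # curvilinear distance
    # walk along the polyline by the signed arc length s
    i, remaining = idx0, s
    step = 1 if s >= 0 else -1
    while 0 <= i + step < len(curve):
        seg = math.dist(curve[i], curve[i + step])
        if abs(remaining) < seg:
            break
        remaining -= step * seg
        i += step
    return i
```

A drag perpendicular to the tangent yields a zero dot product, so the point of interest does not move, which matches the requirement that the second point stays on the curve regardless of the modification action.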